00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2379 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3644 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.083 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.083 The recommended git tool is: git 00:00:00.084 using credential 00000000-0000-0000-0000-000000000002 00:00:00.085 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.109 Fetching changes from the remote Git repository 00:00:00.110 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.150 Using shallow fetch with depth 1 00:00:00.150 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.150 > git --version # timeout=10 00:00:00.195 > git --version # 'git version 2.39.2' 00:00:00.195 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.226 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.226 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.627 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.638 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.650 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.650 > git config core.sparsecheckout # timeout=10 00:00:04.664 > git read-tree -mu HEAD # timeout=10 00:00:04.679 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.696 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.696 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.792 [Pipeline] Start of Pipeline 00:00:04.805 [Pipeline] library 00:00:04.806 Loading library shm_lib@master 00:00:04.806 Library shm_lib@master is cached. Copying from home. 00:00:04.819 [Pipeline] node 00:00:04.831 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:04.833 [Pipeline] { 00:00:04.840 [Pipeline] catchError 00:00:04.841 [Pipeline] { 00:00:04.851 [Pipeline] wrap 00:00:04.858 [Pipeline] { 00:00:04.865 [Pipeline] stage 00:00:04.867 [Pipeline] { (Prologue) 00:00:04.884 [Pipeline] echo 00:00:04.886 Node: VM-host-SM9 00:00:04.892 [Pipeline] cleanWs 00:00:04.901 [WS-CLEANUP] Deleting project workspace... 00:00:04.901 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.907 [WS-CLEANUP] done 00:00:05.122 [Pipeline] setCustomBuildProperty 00:00:05.210 [Pipeline] httpRequest 00:00:05.664 [Pipeline] echo 00:00:05.666 Sorcerer 10.211.164.20 is alive 00:00:05.672 [Pipeline] retry 00:00:05.673 [Pipeline] { 00:00:05.682 [Pipeline] httpRequest 00:00:05.686 HttpMethod: GET 00:00:05.687 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.687 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:05.693 Response Code: HTTP/1.1 200 OK 00:00:05.695 Success: Status code 200 is in the accepted range: 200,404 00:00:05.695 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.182 [Pipeline] } 00:00:06.202 [Pipeline] // retry 00:00:06.208 [Pipeline] sh 00:00:06.486 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.498 [Pipeline] httpRequest 00:00:06.834 [Pipeline] echo 00:00:06.835 Sorcerer 10.211.164.20 is alive 00:00:06.844 [Pipeline] retry 00:00:06.846 [Pipeline] { 00:00:06.858 [Pipeline] httpRequest 00:00:06.862 HttpMethod: GET 00:00:06.863 URL: http://10.211.164.20/packages/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:00:06.863 Sending request to url: http://10.211.164.20/packages/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:00:06.864 Response Code: HTTP/1.1 200 OK 00:00:06.865 Success: Status code 200 is in the accepted range: 200,404 00:00:06.865 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:00:30.031 [Pipeline] } 00:00:30.050 [Pipeline] // retry 00:00:30.060 [Pipeline] sh 00:00:30.376 + tar --no-same-owner -xf spdk_d47eb51c960b88a8c704cc184fd594dbc3abad70.tar.gz 00:00:32.920 [Pipeline] sh 00:00:33.196 + git -C spdk log --oneline -n5 00:00:33.196 d47eb51c9 bdev: fix a race between reset start and complete 00:00:33.196 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:00:33.196 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:00:33.196 4bcab9fb9 correct kick for CQ full case 00:00:33.196 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:00:33.214 [Pipeline] withCredentials 00:00:33.225 > git --version # timeout=10 00:00:33.237 > git --version # 'git version 2.39.2' 00:00:33.253 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:33.255 [Pipeline] { 00:00:33.264 [Pipeline] retry 00:00:33.266 [Pipeline] { 00:00:33.281 [Pipeline] sh 00:00:33.560 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:33.571 [Pipeline] } 00:00:33.589 [Pipeline] // retry 00:00:33.594 [Pipeline] } 00:00:33.610 [Pipeline] // withCredentials 00:00:33.621 [Pipeline] httpRequest 00:00:34.035 [Pipeline] echo 00:00:34.038 Sorcerer 10.211.164.20 is alive 00:00:34.049 [Pipeline] retry 00:00:34.051 [Pipeline] { 00:00:34.066 [Pipeline] httpRequest 00:00:34.071 HttpMethod: GET 00:00:34.071 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:34.072 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:34.087 Response Code: HTTP/1.1 200 OK 00:00:34.087 Success: Status code 200 is in the accepted range: 200,404 00:00:34.088 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 
00:00:51.236 [Pipeline] } 00:00:51.252 [Pipeline] // retry 00:00:51.259 [Pipeline] sh 00:00:51.541 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:52.930 [Pipeline] sh 00:00:53.210 + git -C dpdk log --oneline -n5 00:00:53.210 caf0f5d395 version: 22.11.4 00:00:53.210 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:00:53.210 dc9c799c7d vhost: fix missing spinlock unlock 00:00:53.210 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:00:53.210 6ef77f2a5e net/gve: fix RX buffer size alignment 00:00:53.227 [Pipeline] writeFile 00:00:53.242 [Pipeline] sh 00:00:53.522 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:53.534 [Pipeline] sh 00:00:53.814 + cat autorun-spdk.conf 00:00:53.814 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:53.814 SPDK_TEST_NVMF=1 00:00:53.814 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:53.814 SPDK_TEST_URING=1 00:00:53.814 SPDK_TEST_USDT=1 00:00:53.814 SPDK_RUN_UBSAN=1 00:00:53.814 NET_TYPE=virt 00:00:53.814 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:00:53.814 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:00:53.814 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:53.820 RUN_NIGHTLY=1 00:00:53.822 [Pipeline] } 00:00:53.836 [Pipeline] // stage 00:00:53.851 [Pipeline] stage 00:00:53.853 [Pipeline] { (Run VM) 00:00:53.865 [Pipeline] sh 00:00:54.146 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:54.146 + echo 'Start stage prepare_nvme.sh' 00:00:54.146 Start stage prepare_nvme.sh 00:00:54.146 + [[ -n 1 ]] 00:00:54.146 + disk_prefix=ex1 00:00:54.146 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:00:54.146 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:00:54.146 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:00:54.146 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:54.146 ++ SPDK_TEST_NVMF=1 00:00:54.146 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:54.146 ++ SPDK_TEST_URING=1 00:00:54.146 ++ SPDK_TEST_USDT=1 00:00:54.146 ++ SPDK_RUN_UBSAN=1 00:00:54.146 ++ NET_TYPE=virt 00:00:54.146 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:00:54.146 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:00:54.146 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:54.146 ++ RUN_NIGHTLY=1 00:00:54.146 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:54.146 + nvme_files=() 00:00:54.146 + declare -A nvme_files 00:00:54.146 + backend_dir=/var/lib/libvirt/images/backends 00:00:54.146 + nvme_files['nvme.img']=5G 00:00:54.146 + nvme_files['nvme-cmb.img']=5G 00:00:54.146 + nvme_files['nvme-multi0.img']=4G 00:00:54.146 + nvme_files['nvme-multi1.img']=4G 00:00:54.146 + nvme_files['nvme-multi2.img']=4G 00:00:54.146 + nvme_files['nvme-openstack.img']=8G 00:00:54.146 + nvme_files['nvme-zns.img']=5G 00:00:54.146 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:54.146 + (( SPDK_TEST_FTL == 1 )) 00:00:54.146 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:54.146 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:54.146 + for nvme in "${!nvme_files[@]}" 00:00:54.146 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:00:54.146 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:54.146 + for nvme in "${!nvme_files[@]}" 00:00:54.146 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:00:54.146 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:54.146 + for nvme in "${!nvme_files[@]}" 00:00:54.146 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:00:54.146 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:54.146 + for nvme in "${!nvme_files[@]}" 00:00:54.146 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:00:54.146 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:54.146 + for nvme in "${!nvme_files[@]}" 00:00:54.146 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:00:54.146 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:54.146 + for nvme in "${!nvme_files[@]}" 00:00:54.146 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:00:54.405 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:54.405 + for nvme in "${!nvme_files[@]}" 00:00:54.405 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:00:54.405 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:54.405 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:00:54.405 + echo 'End stage prepare_nvme.sh' 00:00:54.405 End stage prepare_nvme.sh 00:00:54.417 [Pipeline] sh 00:00:54.697 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:54.697 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39 00:00:54.697 00:00:54.697 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:00:54.697 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:00:54.697 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:54.697 HELP=0 00:00:54.697 DRY_RUN=0 00:00:54.697 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:00:54.697 NVME_DISKS_TYPE=nvme,nvme, 00:00:54.697 NVME_AUTO_CREATE=0 00:00:54.697 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:00:54.697 NVME_CMB=,, 00:00:54.697 NVME_PMR=,, 00:00:54.697 NVME_ZNS=,, 00:00:54.697 NVME_MS=,, 00:00:54.697 NVME_FDP=,, 
00:00:54.697 SPDK_VAGRANT_DISTRO=fedora39 00:00:54.697 SPDK_VAGRANT_VMCPU=10 00:00:54.697 SPDK_VAGRANT_VMRAM=12288 00:00:54.697 SPDK_VAGRANT_PROVIDER=libvirt 00:00:54.697 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:54.697 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:54.697 SPDK_OPENSTACK_NETWORK=0 00:00:54.697 VAGRANT_PACKAGE_BOX=0 00:00:54.697 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:54.697 FORCE_DISTRO=true 00:00:54.697 VAGRANT_BOX_VERSION= 00:00:54.697 EXTRA_VAGRANTFILES= 00:00:54.697 NIC_MODEL=e1000 00:00:54.697 00:00:54.698 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:00:54.698 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:57.233 Bringing machine 'default' up with 'libvirt' provider... 00:00:57.801 ==> default: Creating image (snapshot of base box volume). 00:00:57.801 ==> default: Creating domain with the following settings... 00:00:57.801 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731980528_4f3c0a2a3ca49e9d992c 00:00:57.801 ==> default: -- Domain type: kvm 00:00:57.801 ==> default: -- Cpus: 10 00:00:57.801 ==> default: -- Feature: acpi 00:00:57.801 ==> default: -- Feature: apic 00:00:57.801 ==> default: -- Feature: pae 00:00:57.801 ==> default: -- Memory: 12288M 00:00:57.801 ==> default: -- Memory Backing: hugepages: 00:00:57.801 ==> default: -- Management MAC: 00:00:57.801 ==> default: -- Loader: 00:00:57.801 ==> default: -- Nvram: 00:00:57.801 ==> default: -- Base box: spdk/fedora39 00:00:57.801 ==> default: -- Storage pool: default 00:00:57.801 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731980528_4f3c0a2a3ca49e9d992c.img (20G) 00:00:57.801 ==> default: -- Volume Cache: default 00:00:57.801 ==> default: -- Kernel: 00:00:57.801 ==> default: -- Initrd: 00:00:57.801 ==> default: -- Graphics Type: vnc 00:00:57.801 ==> default: -- Graphics Port: -1 00:00:57.801 ==> default: -- Graphics IP: 127.0.0.1 00:00:57.801 ==> default: -- Graphics Password: Not defined 00:00:57.801 ==> default: -- Video Type: cirrus 00:00:57.801 ==> default: -- Video VRAM: 9216 00:00:57.801 ==> default: -- Sound Type: 00:00:57.801 ==> default: -- Keymap: en-us 00:00:57.801 ==> default: -- TPM Path: 00:00:57.801 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:57.801 ==> default: -- Command line args: 00:00:57.801 ==> default: -> value=-device, 00:00:57.801 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:57.801 ==> default: -> value=-drive, 00:00:57.801 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:00:57.801 ==> default: -> value=-device, 00:00:57.801 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:57.801 ==> default: -> value=-device, 00:00:57.801 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:57.801 ==> default: -> value=-drive, 00:00:57.801 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:57.801 ==> default: -> value=-device, 00:00:57.801 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:57.801 ==> default: -> value=-drive, 00:00:57.801 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:57.801 ==> default: -> value=-device, 00:00:57.801 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:57.801 ==> default: -> value=-drive, 00:00:57.801 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:57.801 ==> default: -> value=-device, 00:00:57.801 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:57.801 ==> default: Creating shared folders metadata... 00:00:57.801 ==> default: Starting domain. 00:00:59.180 ==> default: Waiting for domain to get an IP address... 00:01:17.270 ==> default: Waiting for SSH to become available... 00:01:17.270 ==> default: Configuring and enabling network interfaces... 00:01:19.803 default: SSH address: 192.168.121.10:22 00:01:19.803 default: SSH username: vagrant 00:01:19.803 default: SSH auth method: private key 00:01:21.743 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:29.860 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:35.125 ==> default: Mounting SSHFS shared folder... 00:01:36.061 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:36.061 ==> default: Checking Mount.. 00:01:37.437 ==> default: Folder Successfully Mounted! 00:01:37.437 ==> default: Running provisioner: file... 00:01:38.373 default: ~/.gitconfig => .gitconfig 00:01:38.632 00:01:38.632 SUCCESS! 00:01:38.632 00:01:38.632 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:38.632 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:38.632 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:38.632 00:01:38.641 [Pipeline] } 00:01:38.655 [Pipeline] // stage 00:01:38.664 [Pipeline] dir 00:01:38.664 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:01:38.666 [Pipeline] { 00:01:38.678 [Pipeline] catchError 00:01:38.679 [Pipeline] { 00:01:38.690 [Pipeline] sh 00:01:38.969 + vagrant ssh-config --host vagrant 00:01:38.969 + sed -ne /^Host/,$p 00:01:38.969 + tee ssh_conf 00:01:42.252 Host vagrant 00:01:42.252 HostName 192.168.121.10 00:01:42.252 User vagrant 00:01:42.252 Port 22 00:01:42.252 UserKnownHostsFile /dev/null 00:01:42.252 StrictHostKeyChecking no 00:01:42.252 PasswordAuthentication no 00:01:42.252 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:42.252 IdentitiesOnly yes 00:01:42.252 LogLevel FATAL 00:01:42.252 ForwardAgent yes 00:01:42.252 ForwardX11 yes 00:01:42.252 00:01:42.265 [Pipeline] withEnv 00:01:42.267 [Pipeline] { 00:01:42.280 [Pipeline] sh 00:01:42.558 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:42.558 source /etc/os-release 00:01:42.558 [[ -e /image.version ]] && img=$(< /image.version) 00:01:42.558 # Minimal, systemd-like check. 
00:01:42.558 if [[ -e /.dockerenv ]]; then 00:01:42.558 # Clear garbage from the node's name: 00:01:42.558 # agt-er_autotest_547-896 -> autotest_547-896 00:01:42.558 # $HOSTNAME is the actual container id 00:01:42.558 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:42.558 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:42.558 # We can assume this is a mount from a host where container is running, 00:01:42.558 # so fetch its hostname to easily identify the target swarm worker. 00:01:42.558 container="$(< /etc/hostname) ($agent)" 00:01:42.558 else 00:01:42.558 # Fallback 00:01:42.558 container=$agent 00:01:42.558 fi 00:01:42.558 fi 00:01:42.558 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:42.558 00:01:42.825 [Pipeline] } 00:01:42.841 [Pipeline] // withEnv 00:01:42.850 [Pipeline] setCustomBuildProperty 00:01:42.865 [Pipeline] stage 00:01:42.867 [Pipeline] { (Tests) 00:01:42.884 [Pipeline] sh 00:01:43.161 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:43.432 [Pipeline] sh 00:01:43.709 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:43.983 [Pipeline] timeout 00:01:43.984 Timeout set to expire in 1 hr 0 min 00:01:43.986 [Pipeline] { 00:01:44.002 [Pipeline] sh 00:01:44.342 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:44.910 HEAD is now at d47eb51c9 bdev: fix a race between reset start and complete 00:01:44.922 [Pipeline] sh 00:01:45.202 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:45.475 [Pipeline] sh 00:01:45.757 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:46.031 [Pipeline] sh 00:01:46.312 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:46.571 ++ readlink -f spdk_repo 00:01:46.571 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:46.571 + [[ -n /home/vagrant/spdk_repo ]] 00:01:46.571 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:46.571 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:46.571 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:46.571 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:46.571 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:46.571 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:46.571 + cd /home/vagrant/spdk_repo 00:01:46.571 + source /etc/os-release 00:01:46.571 ++ NAME='Fedora Linux' 00:01:46.571 ++ VERSION='39 (Cloud Edition)' 00:01:46.571 ++ ID=fedora 00:01:46.571 ++ VERSION_ID=39 00:01:46.571 ++ VERSION_CODENAME= 00:01:46.571 ++ PLATFORM_ID=platform:f39 00:01:46.571 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:46.571 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:46.571 ++ LOGO=fedora-logo-icon 00:01:46.571 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:46.571 ++ HOME_URL=https://fedoraproject.org/ 00:01:46.571 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:46.571 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:46.571 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:46.571 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:46.571 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:46.571 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:46.571 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:46.571 ++ SUPPORT_END=2024-11-12 00:01:46.571 ++ VARIANT='Cloud Edition' 00:01:46.571 ++ VARIANT_ID=cloud 00:01:46.571 + uname -a 00:01:46.571 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:46.571 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:46.830 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:46.830 Hugepages 00:01:46.830 node hugesize free / total 00:01:47.088 node0 1048576kB 0 / 0 00:01:47.088 node0 2048kB 0 / 0 00:01:47.088 00:01:47.088 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:47.088 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:47.088 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:47.088 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:47.088 + rm -f /tmp/spdk-ld-path 00:01:47.088 + source autorun-spdk.conf 00:01:47.088 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:47.088 ++ SPDK_TEST_NVMF=1 00:01:47.088 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:47.088 ++ SPDK_TEST_URING=1 00:01:47.088 ++ SPDK_TEST_USDT=1 00:01:47.088 ++ SPDK_RUN_UBSAN=1 00:01:47.088 ++ NET_TYPE=virt 00:01:47.088 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:47.088 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:47.088 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:47.088 ++ RUN_NIGHTLY=1 00:01:47.088 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:47.088 + [[ -n '' ]] 00:01:47.088 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:47.088 + for M in /var/spdk/build-*-manifest.txt 00:01:47.088 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:47.088 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:47.088 + for M in /var/spdk/build-*-manifest.txt 00:01:47.088 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:47.088 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:47.088 + for M in /var/spdk/build-*-manifest.txt 00:01:47.088 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:47.088 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:47.088 ++ uname 00:01:47.088 + [[ Linux == \L\i\n\u\x ]] 00:01:47.088 + sudo dmesg -T 00:01:47.088 + sudo dmesg --clear 00:01:47.088 + dmesg_pid=6001 00:01:47.088 + [[ Fedora Linux == FreeBSD ]] 
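The setup.sh status block above prints the per-node hugepage pools and the PCI-to-block-device map the tests depend on. Those hugepage counters come straight from the kernel's sysfs ABI; a minimal stand-alone reader (standard sysfs paths, nothing SPDK-specific) that reproduces the "node0 2048kB 0 / 0" lines:

#!/usr/bin/env bash
# Print per-NUMA-node hugepage pools from sysfs, in the same
# "node0 2048kB free / total" shape as the setup.sh status output above.
for node in /sys/devices/system/node/node*; do
    for pool in "$node"/hugepages/hugepages-*; do
        printf '%s %s %s / %s\n' "${node##*/}" "${pool##*/hugepages-}" \
            "$(< "$pool/free_hugepages")" "$(< "$pool/nr_hugepages")"
    done
done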
00:01:47.088 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:47.088 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:47.088 + sudo dmesg -Tw 00:01:47.088 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:47.088 + [[ -x /usr/src/fio-static/fio ]] 00:01:47.088 + export FIO_BIN=/usr/src/fio-static/fio 00:01:47.088 + FIO_BIN=/usr/src/fio-static/fio 00:01:47.088 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:47.088 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:47.088 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:47.088 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:47.088 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:47.088 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:47.088 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:47.088 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:47.088 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:47.347 01:42:57 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:47.347 01:42:57 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:47.347 01:42:57 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:47.347 01:42:57 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:47.347 01:42:57 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:47.347 01:42:57 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:01:47.347 01:42:57 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:01:47.347 01:42:57 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:47.347 01:42:57 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:01:47.347 01:42:57 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:47.347 01:42:57 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:47.347 01:42:57 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:47.347 01:42:57 -- spdk_repo/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:01:47.347 01:42:57 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:47.347 01:42:57 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:47.347 01:42:57 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:47.347 01:42:57 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:47.347 01:42:57 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:47.347 01:42:57 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:47.347 01:42:57 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:47.347 01:42:57 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:47.347 01:42:57 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.347 01:42:57 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.347 01:42:57 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.347 01:42:57 -- paths/export.sh@5 -- $ export PATH 00:01:47.347 01:42:57 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.347 01:42:57 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:47.347 01:42:57 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:47.347 01:42:57 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731980577.XXXXXX 00:01:47.347 01:42:57 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731980577.wSHYIb 00:01:47.347 01:42:57 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:47.347 01:42:57 -- common/autobuild_common.sh@492 -- $ '[' -n v22.11.4 ']' 00:01:47.347 01:42:57 -- common/autobuild_common.sh@493 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:01:47.347 01:42:57 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:01:47.347 01:42:57 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:47.347 01:42:57 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:47.347 01:42:57 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:47.347 01:42:57 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:47.347 01:42:57 -- common/autotest_common.sh@10 -- $ set +x 00:01:47.347 01:42:57 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:01:47.347 01:42:57 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:47.347 01:42:57 -- pm/common@17 -- $ local monitor 00:01:47.347 01:42:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.347 01:42:57 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.347 01:42:57 -- pm/common@25 -- $ sleep 1 00:01:47.347 
01:42:57 -- pm/common@21 -- $ date +%s 00:01:47.347 01:42:57 -- pm/common@21 -- $ date +%s 00:01:47.347 01:42:57 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731980577 00:01:47.347 01:42:57 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731980577 00:01:47.347 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731980577_collect-cpu-load.pm.log 00:01:47.347 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731980577_collect-vmstat.pm.log 00:01:48.281 01:42:58 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:48.281 01:42:58 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:48.281 01:42:58 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:48.281 01:42:58 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:48.281 01:42:58 -- spdk/autobuild.sh@16 -- $ date -u 00:01:48.281 Tue Nov 19 01:42:58 AM UTC 2024 00:01:48.281 01:42:58 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:48.281 v25.01-pre-190-gd47eb51c9 00:01:48.281 01:42:58 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:48.281 01:42:58 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:48.281 01:42:58 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:48.281 01:42:58 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:48.281 01:42:58 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:48.281 01:42:58 -- common/autotest_common.sh@10 -- $ set +x 00:01:48.281 ************************************ 00:01:48.281 START TEST ubsan 00:01:48.281 ************************************ 00:01:48.281 using ubsan 00:01:48.281 01:42:58 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:48.281 00:01:48.281 real 0m0.000s 00:01:48.281 user 0m0.000s 00:01:48.281 sys 0m0.000s 00:01:48.281 01:42:58 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:48.281 01:42:58 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:48.281 ************************************ 00:01:48.281 END TEST ubsan 00:01:48.281 ************************************ 00:01:48.538 01:42:58 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:48.538 01:42:58 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:48.538 01:42:58 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:48.538 01:42:58 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:01:48.538 01:42:58 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:48.538 01:42:58 -- common/autotest_common.sh@10 -- $ set +x 00:01:48.538 ************************************ 00:01:48.538 START TEST build_native_dpdk 00:01:48.538 ************************************ 00:01:48.538 01:42:58 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:48.538 01:42:58 build_native_dpdk -- 
common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:01:48.538 caf0f5d395 version: 22.11.4 00:01:48.538 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:48.538 dc9c799c7d vhost: fix missing spinlock unlock 00:01:48.538 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:48.538 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:48.538 01:42:58 build_native_dpdk -- 
common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:48.538 01:42:58 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:48.538 01:42:58 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:48.538 01:42:58 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:48.538 01:42:58 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:48.538 01:42:58 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:48.538 01:42:58 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:48.538 01:42:58 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:48.538 01:42:58 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:48.538 01:42:58 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:48.538 01:42:58 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:48.538 01:42:58 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:48.538 01:42:58 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:48.538 01:42:58 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:48.538 01:42:58 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:48.538 01:42:58 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:48.538 01:42:58 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:48.538 01:42:58 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:48.538 01:42:58 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:48.538 01:42:58 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:48.538 01:42:58 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:48.538 01:42:58 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:48.538 01:42:58 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:01:48.538 01:42:58 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:01:48.538 01:42:58 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:48.539 01:42:58 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:48.539 patching file config/rte_config.h 00:01:48.539 Hunk #1 succeeded at 60 (offset 1 line). 
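The trace above is scripts/common.sh stepping through its dotted-version comparison: cmp_versions splits each version on ".-:", walks the numeric components left to right, and returns as soon as one side wins, so "lt 22.11.4 21.11.0" fails on the first component (22 > 21, hence the "return 1") and the rte_config.h patch is applied to this pre-24.07 tree. A minimal stand-alone sketch of that comparison (an illustrative reimplementation, not the exact upstream helper):

#!/usr/bin/env bash
# Sketch of the dotted-version comparison traced above: split on ".-:",
# compare numeric components left to right, pad missing fields with 0.
cmp_versions() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    local op=$2
    IFS='.-:' read -ra ver2 <<< "$3"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} )) v
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' || $op == '>=' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' || $op == '<=' ]]; return; }
    done
    [[ $op == *=* ]]   # all components equal: only ==, >=, <= hold
}

cmp_versions 22.11.4 '<' 21.11.0 || echo "not older than 21.11.0"   # matches the 'return 1' above
cmp_versions 22.11.4 '<' 24.07.0 && echo "older than 24.07.0"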
00:01:48.539 01:42:58 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:01:48.539 01:42:58 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:48.539 patching file lib/pcapng/rte_pcapng.c 00:01:48.539 Hunk #1 succeeded at 110 (offset -18 lines). 
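The second hunk is gated the same way: "lt 22.11.4 24.07.0" succeeds, so the rte_pcapng.c fix is applied to this 22.11 tree, while the "ge 22.11.4 24.07.0" check that follows fails and the 24.07-only branch is skipped. An equivalent stand-alone guard using GNU sort -V instead of the project's helper (the patch file name here is hypothetical, for illustration only):

#!/usr/bin/env bash
# Apply a fix only to DPDK trees older than 24.07.0, mirroring the
# "lt 22.11.4 24.07.0" gate traced above; sort -V supplies version order.
dpdk_ver=22.11.4
cutoff=24.07.0
if [[ $dpdk_ver != "$cutoff" && \
      $(printf '%s\n' "$dpdk_ver" "$cutoff" | sort -V | head -n1) == "$dpdk_ver" ]]; then
    patch -p1 < fix-pcapng.diff   # hypothetical patch file name
fi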
00:01:48.539 01:42:58 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:48.539 01:42:58 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:01:48.539 01:42:58 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:01:48.539 01:42:58 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:01:48.539 01:42:58 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:01:48.539 01:42:58 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:48.539 01:42:58 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:53.805 The Meson build system 00:01:53.805 Version: 1.5.0 00:01:53.805 Source dir: /home/vagrant/spdk_repo/dpdk 00:01:53.805 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:01:53.805 Build type: native build 00:01:53.805 Program cat found: YES 
(/usr/bin/cat) 00:01:53.805 Project name: DPDK 00:01:53.805 Project version: 22.11.4 00:01:53.805 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:53.805 C linker for the host machine: gcc ld.bfd 2.40-14 00:01:53.805 Host machine cpu family: x86_64 00:01:53.805 Host machine cpu: x86_64 00:01:53.805 Message: ## Building in Developer Mode ## 00:01:53.805 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:53.805 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:01:53.805 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:01:53.805 Program objdump found: YES (/usr/bin/objdump) 00:01:53.805 Program python3 found: YES (/usr/bin/python3) 00:01:53.805 Program cat found: YES (/usr/bin/cat) 00:01:53.805 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:01:53.805 Checking for size of "void *" : 8 00:01:53.805 Checking for size of "void *" : 8 (cached) 00:01:53.805 Library m found: YES 00:01:53.805 Library numa found: YES 00:01:53.805 Has header "numaif.h" : YES 00:01:53.805 Library fdt found: NO 00:01:53.805 Library execinfo found: NO 00:01:53.805 Has header "execinfo.h" : YES 00:01:53.805 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:53.805 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:53.805 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:53.805 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:53.805 Run-time dependency openssl found: YES 3.1.1 00:01:53.805 Run-time dependency libpcap found: YES 1.10.4 00:01:53.805 Has header "pcap.h" with dependency libpcap: YES 00:01:53.805 Compiler for C supports arguments -Wcast-qual: YES 00:01:53.805 Compiler for C supports arguments -Wdeprecated: YES 00:01:53.805 Compiler for C supports arguments -Wformat: YES 00:01:53.805 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:53.805 Compiler for C supports arguments -Wformat-security: NO 00:01:53.805 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:53.805 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:53.805 Compiler for C supports arguments -Wnested-externs: YES 00:01:53.805 Compiler for C supports arguments -Wold-style-definition: YES 00:01:53.805 Compiler for C supports arguments -Wpointer-arith: YES 00:01:53.805 Compiler for C supports arguments -Wsign-compare: YES 00:01:53.805 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:53.805 Compiler for C supports arguments -Wundef: YES 00:01:53.805 Compiler for C supports arguments -Wwrite-strings: YES 00:01:53.805 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:53.805 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:53.805 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:53.805 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:53.805 Compiler for C supports arguments -mavx512f: YES 00:01:53.805 Checking if "AVX512 checking" compiles: YES 00:01:53.805 Fetching value of define "__SSE4_2__" : 1 00:01:53.805 Fetching value of define "__AES__" : 1 00:01:53.805 Fetching value of define "__AVX__" : 1 00:01:53.805 Fetching value of define "__AVX2__" : 1 00:01:53.805 Fetching value of define "__AVX512BW__" : (undefined) 00:01:53.805 Fetching value of define "__AVX512CD__" : (undefined) 00:01:53.805 Fetching value of define 
"__AVX512DQ__" : (undefined) 00:01:53.805 Fetching value of define "__AVX512F__" : (undefined) 00:01:53.805 Fetching value of define "__AVX512VL__" : (undefined) 00:01:53.805 Fetching value of define "__PCLMUL__" : 1 00:01:53.805 Fetching value of define "__RDRND__" : 1 00:01:53.805 Fetching value of define "__RDSEED__" : 1 00:01:53.805 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:53.805 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:53.805 Message: lib/kvargs: Defining dependency "kvargs" 00:01:53.805 Message: lib/telemetry: Defining dependency "telemetry" 00:01:53.805 Checking for function "getentropy" : YES 00:01:53.805 Message: lib/eal: Defining dependency "eal" 00:01:53.805 Message: lib/ring: Defining dependency "ring" 00:01:53.805 Message: lib/rcu: Defining dependency "rcu" 00:01:53.805 Message: lib/mempool: Defining dependency "mempool" 00:01:53.805 Message: lib/mbuf: Defining dependency "mbuf" 00:01:53.805 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:53.805 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:53.805 Compiler for C supports arguments -mpclmul: YES 00:01:53.805 Compiler for C supports arguments -maes: YES 00:01:53.805 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:53.805 Compiler for C supports arguments -mavx512bw: YES 00:01:53.805 Compiler for C supports arguments -mavx512dq: YES 00:01:53.805 Compiler for C supports arguments -mavx512vl: YES 00:01:53.805 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:53.805 Compiler for C supports arguments -mavx2: YES 00:01:53.805 Compiler for C supports arguments -mavx: YES 00:01:53.805 Message: lib/net: Defining dependency "net" 00:01:53.805 Message: lib/meter: Defining dependency "meter" 00:01:53.805 Message: lib/ethdev: Defining dependency "ethdev" 00:01:53.805 Message: lib/pci: Defining dependency "pci" 00:01:53.805 Message: lib/cmdline: Defining dependency "cmdline" 00:01:53.805 Message: lib/metrics: Defining dependency "metrics" 00:01:53.805 Message: lib/hash: Defining dependency "hash" 00:01:53.805 Message: lib/timer: Defining dependency "timer" 00:01:53.805 Fetching value of define "__AVX2__" : 1 (cached) 00:01:53.805 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:53.805 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:53.805 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:53.805 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:53.805 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:53.805 Message: lib/acl: Defining dependency "acl" 00:01:53.805 Message: lib/bbdev: Defining dependency "bbdev" 00:01:53.805 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:53.805 Run-time dependency libelf found: YES 0.191 00:01:53.805 Message: lib/bpf: Defining dependency "bpf" 00:01:53.805 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:53.805 Message: lib/compressdev: Defining dependency "compressdev" 00:01:53.805 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:53.805 Message: lib/distributor: Defining dependency "distributor" 00:01:53.805 Message: lib/efd: Defining dependency "efd" 00:01:53.805 Message: lib/eventdev: Defining dependency "eventdev" 00:01:53.805 Message: lib/gpudev: Defining dependency "gpudev" 00:01:53.805 Message: lib/gro: Defining dependency "gro" 00:01:53.805 Message: lib/gso: Defining dependency "gso" 00:01:53.805 Message: lib/ip_frag: Defining dependency "ip_frag" 
00:01:53.805 Message: lib/jobstats: Defining dependency "jobstats" 00:01:53.805 Message: lib/latencystats: Defining dependency "latencystats" 00:01:53.805 Message: lib/lpm: Defining dependency "lpm" 00:01:53.805 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:53.805 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:53.805 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:53.805 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:53.805 Message: lib/member: Defining dependency "member" 00:01:53.805 Message: lib/pcapng: Defining dependency "pcapng" 00:01:53.805 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:53.805 Message: lib/power: Defining dependency "power" 00:01:53.805 Message: lib/rawdev: Defining dependency "rawdev" 00:01:53.805 Message: lib/regexdev: Defining dependency "regexdev" 00:01:53.805 Message: lib/dmadev: Defining dependency "dmadev" 00:01:53.805 Message: lib/rib: Defining dependency "rib" 00:01:53.805 Message: lib/reorder: Defining dependency "reorder" 00:01:53.805 Message: lib/sched: Defining dependency "sched" 00:01:53.805 Message: lib/security: Defining dependency "security" 00:01:53.805 Message: lib/stack: Defining dependency "stack" 00:01:53.805 Has header "linux/userfaultfd.h" : YES 00:01:53.805 Message: lib/vhost: Defining dependency "vhost" 00:01:53.805 Message: lib/ipsec: Defining dependency "ipsec" 00:01:53.805 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:53.805 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:53.805 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:53.805 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:53.805 Message: lib/fib: Defining dependency "fib" 00:01:53.805 Message: lib/port: Defining dependency "port" 00:01:53.805 Message: lib/pdump: Defining dependency "pdump" 00:01:53.806 Message: lib/table: Defining dependency "table" 00:01:53.806 Message: lib/pipeline: Defining dependency "pipeline" 00:01:53.806 Message: lib/graph: Defining dependency "graph" 00:01:53.806 Message: lib/node: Defining dependency "node" 00:01:53.806 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:53.806 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:53.806 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:53.806 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:53.806 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:53.806 Compiler for C supports arguments -Wno-unused-value: YES 00:01:53.806 Compiler for C supports arguments -Wno-format: YES 00:01:53.806 Compiler for C supports arguments -Wno-format-security: YES 00:01:53.806 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:55.184 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:55.184 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:55.184 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:55.184 Fetching value of define "__AVX2__" : 1 (cached) 00:01:55.184 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:55.184 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:55.184 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:55.184 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:55.184 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:55.184 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:55.184 
Configuring doxy-api.conf using configuration 00:01:55.184 Program sphinx-build found: NO 00:01:55.184 Configuring rte_build_config.h using configuration 00:01:55.184 Message: 00:01:55.184 ================= 00:01:55.184 Applications Enabled 00:01:55.184 ================= 00:01:55.184 00:01:55.184 apps: 00:01:55.184 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:55.184 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:55.184 test-security-perf, 00:01:55.184 00:01:55.184 Message: 00:01:55.184 ================= 00:01:55.184 Libraries Enabled 00:01:55.184 ================= 00:01:55.184 00:01:55.184 libs: 00:01:55.184 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:55.184 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:01:55.184 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:55.184 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:55.184 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:55.184 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:55.184 table, pipeline, graph, node, 00:01:55.184 00:01:55.184 Message: 00:01:55.184 =============== 00:01:55.184 Drivers Enabled 00:01:55.184 =============== 00:01:55.184 00:01:55.184 common: 00:01:55.184 00:01:55.184 bus: 00:01:55.184 pci, vdev, 00:01:55.184 mempool: 00:01:55.184 ring, 00:01:55.184 dma: 00:01:55.184 00:01:55.184 net: 00:01:55.184 i40e, 00:01:55.184 raw: 00:01:55.184 00:01:55.184 crypto: 00:01:55.184 00:01:55.184 compress: 00:01:55.184 00:01:55.184 regex: 00:01:55.184 00:01:55.184 vdpa: 00:01:55.184 00:01:55.184 event: 00:01:55.184 00:01:55.184 baseband: 00:01:55.184 00:01:55.184 gpu: 00:01:55.184 00:01:55.184 00:01:55.184 Message: 00:01:55.184 ================= 00:01:55.184 Content Skipped 00:01:55.184 ================= 00:01:55.184 00:01:55.184 apps: 00:01:55.184 00:01:55.184 libs: 00:01:55.184 kni: explicitly disabled via build config (deprecated lib) 00:01:55.184 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:55.184 00:01:55.184 drivers: 00:01:55.184 common/cpt: not in enabled drivers build config 00:01:55.184 common/dpaax: not in enabled drivers build config 00:01:55.184 common/iavf: not in enabled drivers build config 00:01:55.184 common/idpf: not in enabled drivers build config 00:01:55.184 common/mvep: not in enabled drivers build config 00:01:55.184 common/octeontx: not in enabled drivers build config 00:01:55.184 bus/auxiliary: not in enabled drivers build config 00:01:55.184 bus/dpaa: not in enabled drivers build config 00:01:55.184 bus/fslmc: not in enabled drivers build config 00:01:55.184 bus/ifpga: not in enabled drivers build config 00:01:55.184 bus/vmbus: not in enabled drivers build config 00:01:55.184 common/cnxk: not in enabled drivers build config 00:01:55.184 common/mlx5: not in enabled drivers build config 00:01:55.184 common/qat: not in enabled drivers build config 00:01:55.184 common/sfc_efx: not in enabled drivers build config 00:01:55.184 mempool/bucket: not in enabled drivers build config 00:01:55.184 mempool/cnxk: not in enabled drivers build config 00:01:55.184 mempool/dpaa: not in enabled drivers build config 00:01:55.184 mempool/dpaa2: not in enabled drivers build config 00:01:55.184 mempool/octeontx: not in enabled drivers build config 00:01:55.184 mempool/stack: not in enabled drivers build config 00:01:55.184 dma/cnxk: not in enabled drivers build 
config 00:01:55.184 dma/dpaa: not in enabled drivers build config 00:01:55.184 dma/dpaa2: not in enabled drivers build config 00:01:55.184 dma/hisilicon: not in enabled drivers build config 00:01:55.184 dma/idxd: not in enabled drivers build config 00:01:55.184 dma/ioat: not in enabled drivers build config 00:01:55.184 dma/skeleton: not in enabled drivers build config 00:01:55.184 net/af_packet: not in enabled drivers build config 00:01:55.184 net/af_xdp: not in enabled drivers build config 00:01:55.184 net/ark: not in enabled drivers build config 00:01:55.184 net/atlantic: not in enabled drivers build config 00:01:55.184 net/avp: not in enabled drivers build config 00:01:55.184 net/axgbe: not in enabled drivers build config 00:01:55.184 net/bnx2x: not in enabled drivers build config 00:01:55.184 net/bnxt: not in enabled drivers build config 00:01:55.184 net/bonding: not in enabled drivers build config 00:01:55.184 net/cnxk: not in enabled drivers build config 00:01:55.184 net/cxgbe: not in enabled drivers build config 00:01:55.184 net/dpaa: not in enabled drivers build config 00:01:55.184 net/dpaa2: not in enabled drivers build config 00:01:55.184 net/e1000: not in enabled drivers build config 00:01:55.184 net/ena: not in enabled drivers build config 00:01:55.184 net/enetc: not in enabled drivers build config 00:01:55.184 net/enetfec: not in enabled drivers build config 00:01:55.184 net/enic: not in enabled drivers build config 00:01:55.184 net/failsafe: not in enabled drivers build config 00:01:55.184 net/fm10k: not in enabled drivers build config 00:01:55.184 net/gve: not in enabled drivers build config 00:01:55.184 net/hinic: not in enabled drivers build config 00:01:55.184 net/hns3: not in enabled drivers build config 00:01:55.184 net/iavf: not in enabled drivers build config 00:01:55.184 net/ice: not in enabled drivers build config 00:01:55.184 net/idpf: not in enabled drivers build config 00:01:55.184 net/igc: not in enabled drivers build config 00:01:55.184 net/ionic: not in enabled drivers build config 00:01:55.184 net/ipn3ke: not in enabled drivers build config 00:01:55.184 net/ixgbe: not in enabled drivers build config 00:01:55.184 net/kni: not in enabled drivers build config 00:01:55.184 net/liquidio: not in enabled drivers build config 00:01:55.184 net/mana: not in enabled drivers build config 00:01:55.184 net/memif: not in enabled drivers build config 00:01:55.184 net/mlx4: not in enabled drivers build config 00:01:55.184 net/mlx5: not in enabled drivers build config 00:01:55.184 net/mvneta: not in enabled drivers build config 00:01:55.184 net/mvpp2: not in enabled drivers build config 00:01:55.184 net/netvsc: not in enabled drivers build config 00:01:55.184 net/nfb: not in enabled drivers build config 00:01:55.184 net/nfp: not in enabled drivers build config 00:01:55.184 net/ngbe: not in enabled drivers build config 00:01:55.184 net/null: not in enabled drivers build config 00:01:55.184 net/octeontx: not in enabled drivers build config 00:01:55.184 net/octeon_ep: not in enabled drivers build config 00:01:55.184 net/pcap: not in enabled drivers build config 00:01:55.184 net/pfe: not in enabled drivers build config 00:01:55.184 net/qede: not in enabled drivers build config 00:01:55.184 net/ring: not in enabled drivers build config 00:01:55.184 net/sfc: not in enabled drivers build config 00:01:55.184 net/softnic: not in enabled drivers build config 00:01:55.184 net/tap: not in enabled drivers build config 00:01:55.184 net/thunderx: not in enabled drivers build config 
00:01:55.184 net/txgbe: not in enabled drivers build config 00:01:55.184 net/vdev_netvsc: not in enabled drivers build config 00:01:55.184 net/vhost: not in enabled drivers build config 00:01:55.184 net/virtio: not in enabled drivers build config 00:01:55.184 net/vmxnet3: not in enabled drivers build config 00:01:55.184 raw/cnxk_bphy: not in enabled drivers build config 00:01:55.184 raw/cnxk_gpio: not in enabled drivers build config 00:01:55.184 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:55.184 raw/ifpga: not in enabled drivers build config 00:01:55.184 raw/ntb: not in enabled drivers build config 00:01:55.185 raw/skeleton: not in enabled drivers build config 00:01:55.185 crypto/armv8: not in enabled drivers build config 00:01:55.185 crypto/bcmfs: not in enabled drivers build config 00:01:55.185 crypto/caam_jr: not in enabled drivers build config 00:01:55.185 crypto/ccp: not in enabled drivers build config 00:01:55.185 crypto/cnxk: not in enabled drivers build config 00:01:55.185 crypto/dpaa_sec: not in enabled drivers build config 00:01:55.185 crypto/dpaa2_sec: not in enabled drivers build config 00:01:55.185 crypto/ipsec_mb: not in enabled drivers build config 00:01:55.185 crypto/mlx5: not in enabled drivers build config 00:01:55.185 crypto/mvsam: not in enabled drivers build config 00:01:55.185 crypto/nitrox: not in enabled drivers build config 00:01:55.185 crypto/null: not in enabled drivers build config 00:01:55.185 crypto/octeontx: not in enabled drivers build config 00:01:55.185 crypto/openssl: not in enabled drivers build config 00:01:55.185 crypto/scheduler: not in enabled drivers build config 00:01:55.185 crypto/uadk: not in enabled drivers build config 00:01:55.185 crypto/virtio: not in enabled drivers build config 00:01:55.185 compress/isal: not in enabled drivers build config 00:01:55.185 compress/mlx5: not in enabled drivers build config 00:01:55.185 compress/octeontx: not in enabled drivers build config 00:01:55.185 compress/zlib: not in enabled drivers build config 00:01:55.185 regex/mlx5: not in enabled drivers build config 00:01:55.185 regex/cn9k: not in enabled drivers build config 00:01:55.185 vdpa/ifc: not in enabled drivers build config 00:01:55.185 vdpa/mlx5: not in enabled drivers build config 00:01:55.185 vdpa/sfc: not in enabled drivers build config 00:01:55.185 event/cnxk: not in enabled drivers build config 00:01:55.185 event/dlb2: not in enabled drivers build config 00:01:55.185 event/dpaa: not in enabled drivers build config 00:01:55.185 event/dpaa2: not in enabled drivers build config 00:01:55.185 event/dsw: not in enabled drivers build config 00:01:55.185 event/opdl: not in enabled drivers build config 00:01:55.185 event/skeleton: not in enabled drivers build config 00:01:55.185 event/sw: not in enabled drivers build config 00:01:55.185 event/octeontx: not in enabled drivers build config 00:01:55.185 baseband/acc: not in enabled drivers build config 00:01:55.185 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:55.185 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:55.185 baseband/la12xx: not in enabled drivers build config 00:01:55.185 baseband/null: not in enabled drivers build config 00:01:55.185 baseband/turbo_sw: not in enabled drivers build config 00:01:55.185 gpu/cuda: not in enabled drivers build config 00:01:55.185 00:01:55.185 00:01:55.185 Build targets in project: 314 00:01:55.185 00:01:55.185 DPDK 22.11.4 00:01:55.185 00:01:55.185 User defined options 00:01:55.185 libdir : lib 00:01:55.185 prefix : 
/home/vagrant/spdk_repo/dpdk/build 00:01:55.185 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:55.185 c_link_args : 00:01:55.185 enable_docs : false 00:01:55.185 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:55.185 enable_kmods : false 00:01:55.185 machine : native 00:01:55.185 tests : false 00:01:55.185 00:01:55.185 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:55.185 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:55.185 01:43:05 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:01:55.185 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:01:55.185 [1/743] Generating lib/rte_kvargs_def with a custom command 00:01:55.185 [2/743] Generating lib/rte_kvargs_mingw with a custom command 00:01:55.185 [3/743] Generating lib/rte_telemetry_def with a custom command 00:01:55.185 [4/743] Generating lib/rte_telemetry_mingw with a custom command 00:01:55.185 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:55.185 [6/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:55.185 [7/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:55.185 [8/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:55.185 [9/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:55.185 [10/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:55.185 [11/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:55.185 [12/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:55.185 [13/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:55.185 [14/743] Linking static target lib/librte_kvargs.a 00:01:55.185 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:55.444 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:55.444 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:55.444 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:55.444 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:55.444 [20/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:55.444 [21/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:55.444 [22/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:55.444 [23/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.444 [24/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:55.444 [25/743] Linking target lib/librte_kvargs.so.23.0 00:01:55.703 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:55.703 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:55.703 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:55.703 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:55.703 [30/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:55.703 [31/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 
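The two steps tagged [13/743] and [14/743] above produce librte_kvargs, the small key=value argument parser that device-argument handling later in the build relies on. As a rough illustration only, a minimal consumer might look like the sketch below; the "ring_size"/"mode" keys and the input string are invented for this example and do not appear in the build.

    /* rte_kvargs sketch: parse a comma-separated key=value string the way
     * PMDs parse devargs. Keys and input are invented example values. */
    #include <stdio.h>
    #include <rte_kvargs.h>

    static int print_pair(const char *key, const char *value, void *opaque)
    {
        (void)opaque;
        printf("%s = %s\n", key, value);
        return 0;
    }

    int main(void)
    {
        static const char *const valid_keys[] = { "ring_size", "mode", NULL };
        struct rte_kvargs *kvlist =
            rte_kvargs_parse("ring_size=1024,mode=fast", valid_keys);

        if (kvlist == NULL)
            return 1;
        /* Invoke the callback once per occurrence of "mode". */
        rte_kvargs_process(kvlist, "mode", print_pair, NULL);
        rte_kvargs_free(kvlist);
        return 0;
    }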
00:01:55.703 [32/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:55.703 [33/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:55.703 [34/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:55.703 [35/743] Linking static target lib/librte_telemetry.a 00:01:55.703 [36/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:55.703 [37/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:55.703 [38/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:55.961 [39/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:55.961 [40/743] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:55.961 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:55.961 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:55.961 [43/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:55.961 [44/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:55.961 [45/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.220 [46/743] Linking target lib/librte_telemetry.so.23.0 00:01:56.220 [47/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:56.220 [48/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:56.220 [49/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:56.220 [50/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:56.220 [51/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:56.220 [52/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:56.220 [53/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:56.220 [54/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:56.220 [55/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:56.220 [56/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:56.220 [57/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:56.220 [58/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:56.220 [59/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:56.220 [60/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:56.478 [61/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:56.478 [62/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:56.478 [63/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:56.478 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:56.478 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:56.478 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:56.478 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:56.478 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:56.478 [69/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:56.478 [70/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:56.478 [71/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 
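The long run of eal_common_*/eal_unix_*/eal_linux_* objects above all feed librte_eal, the environment abstraction layer linked a few steps further down at [96/743]; every DPDK application brings it up first. A minimal entry point, as a sketch (the standard pattern, nothing specific to this build assumed):

    /* Minimal EAL bring-up: rte_eal_init() consumes the EAL portion of
     * argv (cores, memory, PCI options) and returns the number of
     * arguments it used, or a negative value on failure. */
    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_lcore.h>

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)
            return 1;
        printf("EAL up, running on lcore %u\n", rte_lcore_id());
        rte_eal_cleanup();   /* release hugepages and other EAL state */
        return 0;
    }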
00:01:56.478 [72/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:56.478 [73/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:56.736 [74/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:56.736 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:56.736 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:56.736 [77/743] Generating lib/rte_eal_mingw with a custom command 00:01:56.736 [78/743] Generating lib/rte_eal_def with a custom command 00:01:56.736 [79/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:56.736 [80/743] Generating lib/rte_ring_def with a custom command 00:01:56.736 [81/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:56.736 [82/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:56.736 [83/743] Generating lib/rte_ring_mingw with a custom command 00:01:56.736 [84/743] Generating lib/rte_rcu_def with a custom command 00:01:56.736 [85/743] Generating lib/rte_rcu_mingw with a custom command 00:01:56.736 [86/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:56.736 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:56.736 [88/743] Linking static target lib/librte_ring.a 00:01:56.995 [89/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:56.995 [90/743] Generating lib/rte_mempool_def with a custom command 00:01:56.995 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:01:56.995 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:56.995 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:56.995 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.253 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:57.253 [96/743] Linking static target lib/librte_eal.a 00:01:57.542 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:57.542 [98/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:57.542 [99/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:57.542 [100/743] Generating lib/rte_mbuf_def with a custom command 00:01:57.542 [101/743] Generating lib/rte_mbuf_mingw with a custom command 00:01:57.542 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:57.542 [103/743] Linking static target lib/librte_rcu.a 00:01:57.542 [104/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:57.542 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:57.805 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:57.805 [107/743] Linking static target lib/librte_mempool.a 00:01:57.805 [108/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:57.805 [109/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.063 [110/743] Generating lib/rte_net_def with a custom command 00:01:58.063 [111/743] Generating lib/rte_net_mingw with a custom command 00:01:58.063 [112/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:58.063 [113/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:58.063 [114/743] Generating lib/rte_meter_def with a custom command 00:01:58.063 [115/743] 
Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:58.063 [116/743] Linking static target lib/librte_meter.a 00:01:58.063 [117/743] Generating lib/rte_meter_mingw with a custom command 00:01:58.063 [118/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:58.322 [119/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:58.322 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:58.322 [121/743] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.322 [122/743] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:58.580 [123/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:58.580 [124/743] Linking static target lib/librte_net.a 00:01:58.580 [125/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:58.580 [126/743] Linking static target lib/librte_mbuf.a 00:01:58.580 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.838 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.838 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:58.838 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:59.096 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:59.096 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:59.096 [133/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:59.096 [134/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.353 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:59.612 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:59.612 [137/743] Generating lib/rte_ethdev_def with a custom command 00:01:59.612 [138/743] Generating lib/rte_ethdev_mingw with a custom command 00:01:59.612 [139/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:59.870 [140/743] Generating lib/rte_pci_def with a custom command 00:01:59.870 [141/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:59.870 [142/743] Generating lib/rte_pci_mingw with a custom command 00:01:59.870 [143/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:59.870 [144/743] Linking static target lib/librte_pci.a 00:01:59.870 [145/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:59.870 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:59.870 [147/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:59.870 [148/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:59.871 [149/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:59.871 [150/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.129 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:00.129 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:00.129 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:00.129 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:00.129 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:00.129 [156/743] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:00.129 [157/743] Generating lib/rte_cmdline_def with a custom command 00:02:00.129 [158/743] Generating lib/rte_cmdline_mingw with a custom command 00:02:00.129 [159/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:00.129 [160/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:00.129 [161/743] Generating lib/rte_metrics_def with a custom command 00:02:00.129 [162/743] Generating lib/rte_metrics_mingw with a custom command 00:02:00.388 [163/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:00.388 [164/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:00.388 [165/743] Generating lib/rte_hash_def with a custom command 00:02:00.388 [166/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:00.388 [167/743] Generating lib/rte_hash_mingw with a custom command 00:02:00.388 [168/743] Generating lib/rte_timer_def with a custom command 00:02:00.388 [169/743] Generating lib/rte_timer_mingw with a custom command 00:02:00.388 [170/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:00.388 [171/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:00.647 [172/743] Linking static target lib/librte_cmdline.a 00:02:00.647 [173/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:00.905 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:00.905 [175/743] Linking static target lib/librte_metrics.a 00:02:00.905 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:00.905 [177/743] Linking static target lib/librte_timer.a 00:02:01.163 [178/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.163 [179/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.163 [180/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:01.421 [181/743] Linking static target lib/librte_ethdev.a 00:02:01.421 [182/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:01.421 [183/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.421 [184/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:01.988 [185/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:01.988 [186/743] Generating lib/rte_acl_def with a custom command 00:02:01.988 [187/743] Generating lib/rte_acl_mingw with a custom command 00:02:01.988 [188/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:01.988 [189/743] Generating lib/rte_bbdev_def with a custom command 00:02:01.988 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:02:01.988 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:02.246 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:02:02.246 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:02:02.505 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:02.764 [195/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:02.764 [196/743] Linking static target lib/librte_bitratestats.a 00:02:02.764 [197/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:03.021 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.021 
[199/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:03.021 [200/743] Linking static target lib/librte_bbdev.a 00:02:03.279 [201/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:03.279 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:03.279 [203/743] Linking static target lib/librte_hash.a 00:02:03.537 [204/743] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:03.537 [205/743] Linking static target lib/acl/libavx512_tmp.a 00:02:03.537 [206/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:03.537 [207/743] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.537 [208/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:03.796 [209/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:04.055 [210/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.055 [211/743] Generating lib/rte_bpf_def with a custom command 00:02:04.055 [212/743] Generating lib/rte_bpf_mingw with a custom command 00:02:04.055 [213/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:04.055 [214/743] Generating lib/rte_cfgfile_def with a custom command 00:02:04.055 [215/743] Generating lib/rte_cfgfile_mingw with a custom command 00:02:04.313 [216/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:04.313 [217/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:04.313 [218/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:04.313 [219/743] Linking static target lib/librte_acl.a 00:02:04.313 [220/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:04.313 [221/743] Linking static target lib/librte_cfgfile.a 00:02:04.313 [222/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.572 [223/743] Linking target lib/librte_eal.so.23.0 00:02:04.572 [224/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.572 [225/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:04.572 [226/743] Generating lib/rte_compressdev_def with a custom command 00:02:04.572 [227/743] Generating lib/rte_compressdev_mingw with a custom command 00:02:04.572 [228/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:04.572 [229/743] Linking target lib/librte_ring.so.23.0 00:02:04.572 [230/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.831 [231/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:04.831 [232/743] Linking target lib/librte_meter.so.23.0 00:02:04.831 [233/743] Linking target lib/librte_pci.so.23.0 00:02:04.831 [234/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:04.831 [235/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:04.831 [236/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:04.831 [237/743] Linking target lib/librte_rcu.so.23.0 00:02:04.831 [238/743] Linking target lib/librte_mempool.so.23.0 00:02:04.831 [239/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:04.831 [240/743] Linking target lib/librte_timer.so.23.0 00:02:04.831 [241/743] Linking target lib/librte_acl.so.23.0 00:02:04.831 [242/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 
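Steps [204/743]-[205/743] show the pattern behind the earlier "-mavx512f: YES / __AVX512F__ : (undefined)" probes: the baseline build targets no AVX-512, so ACL's AVX-512 kernels are compiled into a separate helper archive (libavx512_tmp.a) and chosen at run time on capable CPUs. Reduced to a sketch, with invented function names standing in for the per-ISA objects (DPDK's real dispatchers are more involved):

    /* Runtime ISA dispatch sketch. In a real build the two kernels live in
     * separate objects, the AVX-512 one compiled with -mavx512f; here they
     * are stubs so the example stands alone. */
    #include <stdio.h>

    static int classify_scalar(const void *pkt) { (void)pkt; return 0; }
    static int classify_avx512(const void *pkt) { (void)pkt; return 1; }

    typedef int (*classify_fn)(const void *);

    static classify_fn select_classify(void)
    {
        /* GCC/Clang builtin: queries CPUID at run time, so the binary
         * still runs on machines without AVX-512. */
        if (__builtin_cpu_supports("avx512f"))
            return classify_avx512;
        return classify_scalar;
    }

    int main(void)
    {
        printf("avx512 path: %s\n",
               select_classify() == classify_avx512 ? "yes" : "no");
        return 0;
    }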
00:02:04.831 [243/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:05.090 [244/743] Linking static target lib/librte_bpf.a 00:02:05.090 [245/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:05.090 [246/743] Linking target lib/librte_cfgfile.so.23.0 00:02:05.090 [247/743] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:05.090 [248/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:05.090 [249/743] Linking static target lib/librte_compressdev.a 00:02:05.090 [250/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:05.090 [251/743] Generating lib/rte_cryptodev_def with a custom command 00:02:05.090 [252/743] Linking target lib/librte_mbuf.so.23.0 00:02:05.090 [253/743] Generating lib/rte_cryptodev_mingw with a custom command 00:02:05.090 [254/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:05.090 [255/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:05.090 [256/743] Linking target lib/librte_net.so.23.0 00:02:05.349 [257/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:05.349 [258/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.349 [259/743] Linking target lib/librte_bbdev.so.23.0 00:02:05.349 [260/743] Generating lib/rte_distributor_def with a custom command 00:02:05.349 [261/743] Generating lib/rte_distributor_mingw with a custom command 00:02:05.349 [262/743] Generating lib/rte_efd_def with a custom command 00:02:05.349 [263/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:05.349 [264/743] Generating lib/rte_efd_mingw with a custom command 00:02:05.349 [265/743] Linking target lib/librte_cmdline.so.23.0 00:02:05.349 [266/743] Linking target lib/librte_hash.so.23.0 00:02:05.349 [267/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:05.607 [268/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:05.607 [269/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:05.864 [270/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:05.864 [271/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.864 [272/743] Linking target lib/librte_compressdev.so.23.0 00:02:06.123 [273/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:06.123 [274/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.123 [275/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:06.123 [276/743] Linking static target lib/librte_distributor.a 00:02:06.123 [277/743] Linking target lib/librte_ethdev.so.23.0 00:02:06.123 [278/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:06.123 [279/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:06.382 [280/743] Linking target lib/librte_metrics.so.23.0 00:02:06.382 [281/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.382 [282/743] Linking target lib/librte_bpf.so.23.0 00:02:06.382 [283/743] Generating symbol file 
lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:06.382 [284/743] Linking target lib/librte_bitratestats.so.23.0 00:02:06.382 [285/743] Linking target lib/librte_distributor.so.23.0 00:02:06.382 [286/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:06.382 [287/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:06.641 [288/743] Generating lib/rte_eventdev_def with a custom command 00:02:06.641 [289/743] Generating lib/rte_eventdev_mingw with a custom command 00:02:06.641 [290/743] Generating lib/rte_gpudev_def with a custom command 00:02:06.641 [291/743] Generating lib/rte_gpudev_mingw with a custom command 00:02:06.899 [292/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:06.899 [293/743] Linking static target lib/librte_efd.a 00:02:07.158 [294/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:07.158 [295/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.158 [296/743] Linking target lib/librte_efd.so.23.0 00:02:07.158 [297/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:07.418 [298/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:07.418 [299/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:07.418 [300/743] Generating lib/rte_gro_def with a custom command 00:02:07.418 [301/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:07.418 [302/743] Generating lib/rte_gro_mingw with a custom command 00:02:07.418 [303/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:07.418 [304/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:07.418 [305/743] Linking static target lib/librte_gpudev.a 00:02:07.418 [306/743] Linking static target lib/librte_cryptodev.a 00:02:07.676 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:07.935 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:07.935 [309/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:07.935 [310/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:08.193 [311/743] Generating lib/rte_gso_def with a custom command 00:02:08.193 [312/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:08.193 [313/743] Generating lib/rte_gso_mingw with a custom command 00:02:08.193 [314/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:08.193 [315/743] Linking static target lib/librte_gro.a 00:02:08.193 [316/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.193 [317/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:08.193 [318/743] Linking target lib/librte_gpudev.so.23.0 00:02:08.451 [319/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.451 [320/743] Linking target lib/librte_gro.so.23.0 00:02:08.451 [321/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:08.451 [322/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:08.451 [323/743] Generating lib/rte_ip_frag_def with a custom command 00:02:08.709 [324/743] Generating lib/rte_ip_frag_mingw with a custom command 00:02:08.709 [325/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:08.709 [326/743] Linking static target lib/librte_eventdev.a 
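The GRO/GSO and adapter code in this stretch all traffics in mbufs drawn from mempools, both linked earlier ([107/743], [126/743]). The canonical way an application creates its packet-buffer pool, sketched with common example sizes (to be called after rte_eal_init()):

    /* Standard pktmbuf pool: the sizes here are typical example values,
     * not taken from this build. */
    #include <rte_mbuf.h>
    #include <rte_lcore.h>

    static struct rte_mempool *make_pool(void)
    {
        return rte_pktmbuf_pool_create("mbuf_pool",
                                       8191,  /* number of mbufs */
                                       250,   /* per-core cache size */
                                       0,     /* private area per mbuf */
                                       RTE_MBUF_DEFAULT_BUF_SIZE,
                                       rte_socket_id());
    }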
00:02:08.709 [327/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:08.709 [328/743] Linking static target lib/librte_gso.a 00:02:08.709 [329/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:08.709 [330/743] Linking static target lib/librte_jobstats.a 00:02:08.709 [331/743] Generating lib/rte_jobstats_def with a custom command 00:02:08.709 [332/743] Generating lib/rte_jobstats_mingw with a custom command 00:02:08.967 [333/743] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.967 [334/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:08.967 [335/743] Linking target lib/librte_gso.so.23.0 00:02:08.967 [336/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:08.967 [337/743] Generating lib/rte_latencystats_mingw with a custom command 00:02:08.967 [338/743] Generating lib/rte_latencystats_def with a custom command 00:02:08.967 [339/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:09.224 [340/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:09.224 [341/743] Generating lib/rte_lpm_def with a custom command 00:02:09.224 [342/743] Generating lib/rte_lpm_mingw with a custom command 00:02:09.224 [343/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.224 [344/743] Linking target lib/librte_jobstats.so.23.0 00:02:09.224 [345/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:09.482 [346/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:09.482 [347/743] Linking static target lib/librte_ip_frag.a 00:02:09.741 [348/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.741 [349/743] Linking target lib/librte_ip_frag.so.23.0 00:02:09.741 [350/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.741 [351/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:09.741 [352/743] Linking static target lib/librte_latencystats.a 00:02:09.741 [353/743] Linking target lib/librte_cryptodev.so.23.0 00:02:09.741 [354/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:09.741 [355/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:09.741 [356/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:09.998 [357/743] Generating lib/rte_member_def with a custom command 00:02:09.998 [358/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:09.998 [359/743] Generating lib/rte_member_mingw with a custom command 00:02:09.998 [360/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:09.998 [361/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:09.998 [362/743] Generating lib/rte_pcapng_def with a custom command 00:02:09.998 [363/743] Generating lib/rte_pcapng_mingw with a custom command 00:02:09.998 [364/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.998 [365/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:09.998 [366/743] Linking target lib/librte_latencystats.so.23.0 00:02:09.998 [367/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:09.998 [368/743] 
Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:10.257 [369/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:10.515 [370/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:10.515 [371/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:10.515 [372/743] Linking static target lib/librte_lpm.a 00:02:10.515 [373/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:10.515 [374/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:10.515 [375/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.515 [376/743] Generating lib/rte_power_def with a custom command 00:02:10.515 [377/743] Generating lib/rte_power_mingw with a custom command 00:02:10.515 [378/743] Linking target lib/librte_eventdev.so.23.0 00:02:10.773 [379/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:10.773 [380/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:10.773 [381/743] Generating lib/rte_rawdev_def with a custom command 00:02:10.773 [382/743] Generating lib/rte_rawdev_mingw with a custom command 00:02:10.773 [383/743] Generating lib/rte_regexdev_def with a custom command 00:02:10.773 [384/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.773 [385/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:10.773 [386/743] Generating lib/rte_regexdev_mingw with a custom command 00:02:10.773 [387/743] Linking target lib/librte_lpm.so.23.0 00:02:10.773 [388/743] Generating lib/rte_dmadev_def with a custom command 00:02:11.031 [389/743] Generating lib/rte_dmadev_mingw with a custom command 00:02:11.031 [390/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:11.031 [391/743] Linking static target lib/librte_pcapng.a 00:02:11.031 [392/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:11.031 [393/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:11.031 [394/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:11.031 [395/743] Generating lib/rte_rib_def with a custom command 00:02:11.031 [396/743] Linking static target lib/librte_rawdev.a 00:02:11.031 [397/743] Generating lib/rte_rib_mingw with a custom command 00:02:11.031 [398/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:11.289 [399/743] Generating lib/rte_reorder_def with a custom command 00:02:11.289 [400/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.289 [401/743] Generating lib/rte_reorder_mingw with a custom command 00:02:11.289 [402/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:11.289 [403/743] Linking static target lib/librte_dmadev.a 00:02:11.289 [404/743] Linking target lib/librte_pcapng.so.23.0 00:02:11.289 [405/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:11.289 [406/743] Linking static target lib/librte_power.a 00:02:11.289 [407/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:11.547 [408/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:11.547 [409/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.547 [410/743] Linking target lib/librte_rawdev.so.23.0 
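librte_lpm, linked at [372/743], is the longest-prefix-match routing table. Its classic IPv4 interface, sketched with arbitrary table sizes (needs an initialized EAL, since the table lives in DPDK-managed memory):

    /* rte_lpm sketch: create a table, add 10.0.0.0/8 -> next hop 1,
     * look up an address inside that prefix. Sizes are example values. */
    #include <rte_lpm.h>
    #include <rte_ip.h>

    static int lpm_demo(void)
    {
        struct rte_lpm_config cfg = {
            .max_rules = 1024,
            .number_tbl8s = 256,
            .flags = 0,
        };
        struct rte_lpm *lpm = rte_lpm_create("demo", SOCKET_ID_ANY, &cfg);
        uint32_t next_hop = 0;

        if (lpm == NULL)
            return -1;
        rte_lpm_add(lpm, RTE_IPV4(10, 0, 0, 0), 8, 1);
        if (rte_lpm_lookup(lpm, RTE_IPV4(10, 1, 2, 3), &next_hop) == 0)
            ;   /* next_hop is now 1 */
        rte_lpm_free(lpm);
        return 0;
    }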
00:02:11.547 [411/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:11.547 [412/743] Linking static target lib/librte_regexdev.a 00:02:11.547 [413/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:11.547 [414/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:11.547 [415/743] Generating lib/rte_sched_def with a custom command 00:02:11.809 [416/743] Generating lib/rte_sched_mingw with a custom command 00:02:11.809 [417/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:11.809 [418/743] Linking static target lib/librte_member.a 00:02:11.809 [419/743] Generating lib/rte_security_def with a custom command 00:02:11.809 [420/743] Generating lib/rte_security_mingw with a custom command 00:02:11.809 [421/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:11.809 [422/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.809 [423/743] Linking target lib/librte_dmadev.so.23.0 00:02:11.809 [424/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:11.809 [425/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:11.809 [426/743] Generating lib/rte_stack_def with a custom command 00:02:12.095 [427/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:12.095 [428/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:12.095 [429/743] Generating lib/rte_stack_mingw with a custom command 00:02:12.095 [430/743] Linking static target lib/librte_reorder.a 00:02:12.095 [431/743] Linking static target lib/librte_stack.a 00:02:12.095 [432/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:12.095 [433/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.095 [434/743] Linking target lib/librte_member.so.23.0 00:02:12.095 [435/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:12.095 [436/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.095 [437/743] Linking target lib/librte_stack.so.23.0 00:02:12.364 [438/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:12.364 [439/743] Linking static target lib/librte_rib.a 00:02:12.364 [440/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.364 [441/743] Linking target lib/librte_reorder.so.23.0 00:02:12.364 [442/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.364 [443/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.364 [444/743] Linking target lib/librte_regexdev.so.23.0 00:02:12.364 [445/743] Linking target lib/librte_power.so.23.0 00:02:12.621 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:12.621 [447/743] Linking static target lib/librte_security.a 00:02:12.621 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.621 [449/743] Linking target lib/librte_rib.so.23.0 00:02:12.879 [450/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:12.879 [451/743] Generating lib/rte_vhost_def with a custom command 00:02:12.879 [452/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:12.879 [453/743] Generating lib/rte_vhost_mingw with a custom command 00:02:12.879 [454/743] 
Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:12.879 [455/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.879 [456/743] Linking target lib/librte_security.so.23.0 00:02:13.136 [457/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:13.136 [458/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:13.136 [459/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:13.394 [460/743] Linking static target lib/librte_sched.a 00:02:13.651 [461/743] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.651 [462/743] Linking target lib/librte_sched.so.23.0 00:02:13.651 [463/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:13.651 [464/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:13.651 [465/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:13.651 [466/743] Generating lib/rte_ipsec_def with a custom command 00:02:13.909 [467/743] Generating lib/rte_ipsec_mingw with a custom command 00:02:13.909 [468/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:13.909 [469/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:13.909 [470/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:13.909 [471/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:14.474 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:14.474 [473/743] Generating lib/rte_fib_def with a custom command 00:02:14.474 [474/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:14.474 [475/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:14.474 [476/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:14.474 [477/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:14.474 [478/743] Generating lib/rte_fib_mingw with a custom command 00:02:14.732 [479/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:14.732 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:14.732 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:14.732 [482/743] Linking static target lib/librte_ipsec.a 00:02:14.989 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.247 [484/743] Linking target lib/librte_ipsec.so.23.0 00:02:15.247 [485/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:15.247 [486/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:15.247 [487/743] Linking static target lib/librte_fib.a 00:02:15.504 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:15.504 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:15.504 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:15.504 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:15.762 [492/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.762 [493/743] Linking target lib/librte_fib.so.23.0 00:02:15.762 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:16.327 [495/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:16.327 [496/743] Generating lib/rte_port_def with a custom command 00:02:16.327 [497/743] Generating lib/rte_port_mingw with a 
custom command 00:02:16.584 [498/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:16.584 [499/743] Generating lib/rte_pdump_def with a custom command 00:02:16.584 [500/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:16.584 [501/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:16.584 [502/743] Generating lib/rte_pdump_mingw with a custom command 00:02:16.584 [503/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:16.842 [504/743] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:16.842 [505/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:16.842 [506/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:16.842 [507/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:16.842 [508/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:16.842 [509/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:16.842 [510/743] Linking static target lib/librte_port.a 00:02:17.406 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:17.406 [512/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.406 [513/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:17.406 [514/743] Linking target lib/librte_port.so.23.0 00:02:17.406 [515/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:17.664 [516/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:17.664 [517/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:17.664 [518/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:17.664 [519/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:17.664 [520/743] Linking static target lib/librte_pdump.a 00:02:17.921 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.921 [522/743] Linking target lib/librte_pdump.so.23.0 00:02:18.178 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:18.178 [524/743] Generating lib/rte_table_def with a custom command 00:02:18.178 [525/743] Generating lib/rte_table_mingw with a custom command 00:02:18.435 [526/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:18.435 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:18.435 [528/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:18.693 [529/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:18.693 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:18.950 [531/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:18.950 [532/743] Generating lib/rte_pipeline_def with a custom command 00:02:18.950 [533/743] Generating lib/rte_pipeline_mingw with a custom command 00:02:18.950 [534/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:18.950 [535/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:18.950 [536/743] Linking static target lib/librte_table.a 00:02:18.950 [537/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:19.516 [538/743] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:19.516 [539/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.774 [540/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:19.774 [541/743] Linking target lib/librte_table.so.23.0 00:02:19.774 [542/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:19.774 [543/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:19.774 [544/743] Generating lib/rte_graph_def with a custom command 00:02:19.774 [545/743] Generating lib/rte_graph_mingw with a custom command 00:02:19.774 [546/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:20.032 [547/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:20.032 [548/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:20.289 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:20.547 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:20.547 [551/743] Linking static target lib/librte_graph.a 00:02:20.547 [552/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:20.804 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:20.804 [554/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:20.804 [555/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:21.062 [556/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:21.062 [557/743] Generating lib/rte_node_def with a custom command 00:02:21.062 [558/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:21.062 [559/743] Generating lib/rte_node_mingw with a custom command 00:02:21.320 [560/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:21.320 [561/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:21.320 [562/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.320 [563/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:21.320 [564/743] Linking target lib/librte_graph.so.23.0 00:02:21.321 [565/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:21.578 [566/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:21.578 [567/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:21.578 [568/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:21.578 [569/743] Generating drivers/rte_bus_pci_def with a custom command 00:02:21.578 [570/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:21.578 [571/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:21.578 [572/743] Generating drivers/rte_bus_vdev_def with a custom command 00:02:21.578 [573/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:21.578 [574/743] Generating drivers/rte_mempool_ring_def with a custom command 00:02:21.578 [575/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:21.835 [576/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:21.835 [577/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:21.835 [578/743] Linking static target lib/librte_node.a 00:02:21.835 [579/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:21.835 [580/743] 
Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:21.835 [581/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:22.093 [582/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.093 [583/743] Linking target lib/librte_node.so.23.0 00:02:22.093 [584/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:22.093 [585/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:22.093 [586/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:22.093 [587/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:22.093 [588/743] Linking static target drivers/librte_bus_vdev.a 00:02:22.350 [589/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:22.350 [590/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:22.350 [591/743] Linking static target drivers/librte_bus_pci.a 00:02:22.350 [592/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.350 [593/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:22.350 [594/743] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:22.350 [595/743] Linking target drivers/librte_bus_vdev.so.23.0 00:02:22.607 [596/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:22.607 [597/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:22.607 [598/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:22.607 [599/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.864 [600/743] Linking target drivers/librte_bus_pci.so.23.0 00:02:22.864 [601/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:22.864 [602/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:23.121 [603/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:23.121 [604/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:23.121 [605/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:23.121 [606/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:23.121 [607/743] Linking static target drivers/librte_mempool_ring.a 00:02:23.121 [608/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:23.121 [609/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:23.377 [610/743] Linking target drivers/librte_mempool_ring.so.23.0 00:02:23.633 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:23.891 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:24.148 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:24.148 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:24.429 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:24.687 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:24.687 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 
00:02:25.252 [618/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:25.252 [619/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:25.252 [620/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:25.509 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:25.509 [622/743] Generating drivers/rte_net_i40e_def with a custom command 00:02:25.509 [623/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:25.509 [624/743] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:25.510 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:26.881 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:26.881 [627/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:26.881 [628/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:27.139 [629/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:27.139 [630/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:27.139 [631/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:27.139 [632/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:27.139 [633/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:27.139 [634/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:27.396 [635/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:27.654 [636/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:27.912 [637/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:27.912 [638/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:27.912 [639/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:28.169 [640/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:28.427 [641/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:28.427 [642/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:28.427 [643/743] Linking static target lib/librte_vhost.a 00:02:28.427 [644/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:28.427 [645/743] Linking static target drivers/librte_net_i40e.a 00:02:28.427 [646/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:28.427 [647/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:28.427 [648/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:28.685 [649/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:28.943 [650/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:28.943 [651/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:29.201 [652/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.201 [653/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:29.201 [654/743] Linking target drivers/librte_net_i40e.so.23.0 00:02:29.201 [655/743] 
Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:29.458 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:29.459 [657/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:29.716 [658/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.716 [659/743] Linking target lib/librte_vhost.so.23.0 00:02:29.973 [660/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:29.973 [661/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:29.973 [662/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:30.230 [663/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:30.230 [664/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:30.230 [665/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:30.230 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:30.488 [667/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:30.488 [668/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:30.488 [669/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:30.745 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:31.003 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:31.003 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:31.003 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:31.569 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:31.826 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:31.826 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:32.083 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:32.083 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:32.341 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:32.341 [680/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:32.341 [681/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:32.598 [682/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:32.856 [683/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:32.856 [684/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:32.856 [685/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:32.856 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:33.113 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:33.113 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:33.372 [689/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:33.372 [690/743] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:33.372 [691/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:33.372 [692/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:33.372 [693/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:33.372 [694/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:33.937 [695/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:33.937 [696/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:34.195 [697/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:34.452 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:34.452 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:35.018 [700/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:35.018 [701/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:35.018 [702/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:35.018 [703/743] Linking static target lib/librte_pipeline.a 00:02:35.018 [704/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:35.276 [705/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:35.276 [706/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:35.276 [707/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:35.533 [708/743] Linking target app/dpdk-dumpcap 00:02:35.533 [709/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:35.791 [710/743] Linking target app/dpdk-pdump 00:02:35.791 [711/743] Linking target app/dpdk-proc-info 00:02:35.791 [712/743] Linking target app/dpdk-test-acl 00:02:35.791 [713/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:35.791 [714/743] Linking target app/dpdk-test-bbdev 00:02:36.048 [715/743] Linking target app/dpdk-test-cmdline 00:02:36.048 [716/743] Linking target app/dpdk-test-compress-perf 00:02:36.306 [717/743] Linking target app/dpdk-test-crypto-perf 00:02:36.306 [718/743] Linking target app/dpdk-test-eventdev 00:02:36.306 [719/743] Linking target app/dpdk-test-fib 00:02:36.306 [720/743] Linking target app/dpdk-test-flow-perf 00:02:36.563 [721/743] Linking target app/dpdk-test-gpudev 00:02:36.563 [722/743] Linking target app/dpdk-test-pipeline 00:02:36.821 [723/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:37.089 [724/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:37.089 [725/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:37.089 [726/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:37.357 [727/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:37.615 [728/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:37.615 [729/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:37.615 [730/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:37.615 [731/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.615 [732/743] Linking target lib/librte_pipeline.so.23.0 00:02:37.872 [733/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:38.130 [734/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:38.130 [735/743] Compiling C object 
app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:38.130 [736/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:38.388 [737/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:38.388 [738/743] Linking target app/dpdk-test-sad 00:02:38.645 [739/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:38.645 [740/743] Linking target app/dpdk-test-regex 00:02:38.903 [741/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:39.161 [742/743] Linking target app/dpdk-testpmd 00:02:39.418 [743/743] Linking target app/dpdk-test-security-perf 00:02:39.418 01:43:49 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:02:39.418 01:43:49 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:39.418 01:43:49 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:39.418 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:39.418 [0/1] Installing files. 00:02:39.677 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:39.677 Installing 
/home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.677 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:39.677 
Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.677 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.678 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.678 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.678 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.939 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.939 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.939 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.939 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.939 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.939 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.939 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.939 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.939 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.939 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.939 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.939 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.939 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.940 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 
00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:39.940 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.940 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.941 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:02:39.942 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:39.943 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:39.943 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:39.943 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:39.943 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:39.943 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:39.943 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:39.943 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:39.943 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:39.943 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:39.943 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:39.943 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:39.943 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:39.943 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:39.943 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:39.943 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:39.943 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:02:40.205 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:02:40.205 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:02:40.205 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:40.205 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:02:40.205 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:40.205 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:40.205 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:40.205 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:40.205 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:40.205 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:40.205 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:40.205 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:40.205 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:40.205 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:40.205 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:40.205 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:40.205 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:40.205 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:40.205 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:40.206 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:40.206 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.206 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.207 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
00:02:40.208 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
00:02:40.208 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23
00:02:40.208 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so
00:02:40.208 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23
00:02:40.208 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so
00:02:40.208 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23
00:02:40.208 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so
00:02:40.208 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23
00:02:40.208 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so
00:02:40.208 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23
00:02:40.209 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so
00:02:40.209 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23
00:02:40.209 Installing symlink
pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:40.209 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:02:40.209 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:40.209 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:02:40.209 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:40.209 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:02:40.209 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:40.209 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:02:40.209 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:40.209 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:02:40.209 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:40.209 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:02:40.209 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:40.209 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:02:40.209 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:40.209 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:02:40.209 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:40.209 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:02:40.209 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:40.209 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:02:40.209 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:40.209 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:02:40.209 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:40.209 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:02:40.209 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:40.209 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:02:40.209 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:40.209 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:02:40.209 Installing symlink pointing to librte_cfgfile.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:40.209 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:02:40.209 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:40.209 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:02:40.209 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:40.209 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:02:40.209 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:40.209 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:02:40.209 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:40.209 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:02:40.209 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:40.209 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:02:40.209 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:40.209 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:40.209 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:40.209 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:40.209 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:40.209 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:40.209 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:40.209 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:40.209 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:40.209 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:40.209 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:40.209 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:40.209 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:40.209 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:02:40.209 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:40.209 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:02:40.209 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:40.209 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:02:40.209 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:40.209 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:02:40.209 Installing symlink pointing to librte_jobstats.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:40.209 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:02:40.209 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:40.209 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:02:40.209 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:40.209 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:02:40.209 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:40.209 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:02:40.209 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:40.209 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:02:40.209 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:40.209 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:02:40.209 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:40.209 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:02:40.209 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:40.209 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:02:40.209 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:40.209 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:02:40.209 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:40.210 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:02:40.210 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:40.210 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:02:40.210 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:40.210 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:02:40.210 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:40.210 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:02:40.210 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:40.210 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:02:40.210 Installing symlink pointing to librte_vhost.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:40.210 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:02:40.210 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:40.210 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:02:40.210 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:40.210 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:02:40.210 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:40.210 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:02:40.210 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:40.210 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:02:40.210 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:40.210 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:02:40.210 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:40.210 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:02:40.210 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:40.210 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:02:40.210 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:40.210 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:40.210 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:40.210 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:40.210 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:40.210 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:40.210 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:40.210 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:40.210 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:40.210 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:40.468 01:43:50 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:02:40.468 ************************************ 00:02:40.468 END TEST build_native_dpdk 00:02:40.468 
************************************ 00:02:40.468 01:43:50 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:40.468 00:02:40.468 real 0m51.937s 00:02:40.468 user 6m12.498s 00:02:40.468 sys 0m55.779s 00:02:40.468 01:43:50 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:40.468 01:43:50 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:40.468 01:43:50 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:40.468 01:43:50 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:40.468 01:43:50 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:40.468 01:43:50 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:40.468 01:43:50 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:40.468 01:43:50 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:40.468 01:43:50 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:40.468 01:43:50 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:02:40.468 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:40.726 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:40.726 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:40.726 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:40.983 Using 'verbs' RDMA provider 00:02:54.552 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:09.430 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:09.430 Creating mk/config.mk...done. 00:03:09.430 Creating mk/cc.flags.mk...done. 00:03:09.430 Type 'make' to build. 00:03:09.430 01:44:18 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:09.430 01:44:18 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:09.430 01:44:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:09.430 01:44:18 -- common/autotest_common.sh@10 -- $ set +x 00:03:09.430 ************************************ 00:03:09.430 START TEST make 00:03:09.430 ************************************ 00:03:09.430 01:44:18 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:09.430 make[1]: Nothing to be done for 'all'. 
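[Annotation] The trace above closes the timed build_native_dpdk step and then configures SPDK against the DPDK tree that was just installed under /home/vagrant/spdk_repo/dpdk/build; the "make[1]: Nothing to be done for 'all'." line is only the top-level wrapper target, and the real compile output follows below. A minimal sketch for reproducing this step outside the CI harness, assuming the same spdk_repo checkout layout and keeping only the flags that bind SPDK to the custom DPDK build (the full flag set is in the configure line traced above):

  # Sketch: configure and build SPDK against a pre-built DPDK (paths as in this run)
  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-werror \
      --with-dpdk=/home/vagrant/spdk_repo/dpdk/build \
      --with-shared
  make -j10    # same parallelism as the run_test make invocation above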
00:04:05.639 CC lib/log/log.o 00:04:05.639 CC lib/log/log_flags.o 00:04:05.639 CC lib/log/log_deprecated.o 00:04:05.639 CC lib/ut_mock/mock.o 00:04:05.639 CC lib/ut/ut.o 00:04:05.639 LIB libspdk_ut.a 00:04:05.639 LIB libspdk_ut_mock.a 00:04:05.639 LIB libspdk_log.a 00:04:05.639 SO libspdk_ut.so.2.0 00:04:05.639 SO libspdk_ut_mock.so.6.0 00:04:05.639 SO libspdk_log.so.7.1 00:04:05.639 SYMLINK libspdk_ut.so 00:04:05.639 SYMLINK libspdk_ut_mock.so 00:04:05.639 SYMLINK libspdk_log.so 00:04:05.639 CC lib/util/bit_array.o 00:04:05.639 CC lib/dma/dma.o 00:04:05.639 CC lib/util/base64.o 00:04:05.639 CC lib/util/cpuset.o 00:04:05.639 CC lib/util/crc16.o 00:04:05.639 CC lib/util/crc32.o 00:04:05.639 CXX lib/trace_parser/trace.o 00:04:05.639 CC lib/util/crc32c.o 00:04:05.639 CC lib/ioat/ioat.o 00:04:05.639 CC lib/vfio_user/host/vfio_user_pci.o 00:04:05.639 CC lib/vfio_user/host/vfio_user.o 00:04:05.639 CC lib/util/crc32_ieee.o 00:04:05.639 CC lib/util/crc64.o 00:04:05.639 CC lib/util/dif.o 00:04:05.639 CC lib/util/fd.o 00:04:05.639 CC lib/util/fd_group.o 00:04:05.639 LIB libspdk_dma.a 00:04:05.639 SO libspdk_dma.so.5.0 00:04:05.639 CC lib/util/file.o 00:04:05.639 CC lib/util/hexlify.o 00:04:05.639 CC lib/util/iov.o 00:04:05.639 CC lib/util/math.o 00:04:05.639 LIB libspdk_ioat.a 00:04:05.639 LIB libspdk_vfio_user.a 00:04:05.639 SYMLINK libspdk_dma.so 00:04:05.639 CC lib/util/net.o 00:04:05.639 SO libspdk_ioat.so.7.0 00:04:05.639 SO libspdk_vfio_user.so.5.0 00:04:05.639 SYMLINK libspdk_ioat.so 00:04:05.639 SYMLINK libspdk_vfio_user.so 00:04:05.639 CC lib/util/pipe.o 00:04:05.639 CC lib/util/strerror_tls.o 00:04:05.639 CC lib/util/string.o 00:04:05.639 CC lib/util/uuid.o 00:04:05.639 CC lib/util/xor.o 00:04:05.639 CC lib/util/zipf.o 00:04:05.639 CC lib/util/md5.o 00:04:05.639 LIB libspdk_util.a 00:04:05.639 SO libspdk_util.so.10.1 00:04:05.639 SYMLINK libspdk_util.so 00:04:05.639 LIB libspdk_trace_parser.a 00:04:05.639 SO libspdk_trace_parser.so.6.0 00:04:05.639 SYMLINK libspdk_trace_parser.so 00:04:05.640 CC lib/json/json_parse.o 00:04:05.640 CC lib/env_dpdk/env.o 00:04:05.640 CC lib/rdma_utils/rdma_utils.o 00:04:05.640 CC lib/json/json_util.o 00:04:05.640 CC lib/env_dpdk/memory.o 00:04:05.640 CC lib/idxd/idxd.o 00:04:05.640 CC lib/idxd/idxd_user.o 00:04:05.640 CC lib/json/json_write.o 00:04:05.640 CC lib/conf/conf.o 00:04:05.640 CC lib/vmd/vmd.o 00:04:05.640 LIB libspdk_conf.a 00:04:05.640 CC lib/vmd/led.o 00:04:05.640 CC lib/idxd/idxd_kernel.o 00:04:05.640 SO libspdk_conf.so.6.0 00:04:05.640 CC lib/env_dpdk/pci.o 00:04:05.640 LIB libspdk_rdma_utils.a 00:04:05.640 LIB libspdk_json.a 00:04:05.640 SO libspdk_rdma_utils.so.1.0 00:04:05.640 SYMLINK libspdk_conf.so 00:04:05.640 CC lib/env_dpdk/init.o 00:04:05.640 SO libspdk_json.so.6.0 00:04:05.640 CC lib/env_dpdk/threads.o 00:04:05.640 SYMLINK libspdk_rdma_utils.so 00:04:05.640 CC lib/env_dpdk/pci_ioat.o 00:04:05.640 SYMLINK libspdk_json.so 00:04:05.640 CC lib/env_dpdk/pci_virtio.o 00:04:05.640 CC lib/env_dpdk/pci_vmd.o 00:04:05.640 CC lib/env_dpdk/pci_idxd.o 00:04:05.640 CC lib/env_dpdk/pci_event.o 00:04:05.640 CC lib/rdma_provider/common.o 00:04:05.640 CC lib/jsonrpc/jsonrpc_server.o 00:04:05.640 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:05.640 LIB libspdk_vmd.a 00:04:05.640 CC lib/jsonrpc/jsonrpc_client.o 00:04:05.640 LIB libspdk_idxd.a 00:04:05.640 CC lib/env_dpdk/sigbus_handler.o 00:04:05.640 SO libspdk_vmd.so.6.0 00:04:05.640 SO libspdk_idxd.so.12.1 00:04:05.640 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:05.640 CC lib/env_dpdk/pci_dpdk.o 00:04:05.640 
SYMLINK libspdk_vmd.so 00:04:05.640 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:05.640 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:05.640 SYMLINK libspdk_idxd.so 00:04:05.640 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:05.640 LIB libspdk_jsonrpc.a 00:04:05.640 LIB libspdk_rdma_provider.a 00:04:05.640 SO libspdk_jsonrpc.so.6.0 00:04:05.640 SO libspdk_rdma_provider.so.7.0 00:04:05.640 SYMLINK libspdk_jsonrpc.so 00:04:05.640 SYMLINK libspdk_rdma_provider.so 00:04:05.640 CC lib/rpc/rpc.o 00:04:05.640 LIB libspdk_env_dpdk.a 00:04:05.640 SO libspdk_env_dpdk.so.15.1 00:04:05.640 LIB libspdk_rpc.a 00:04:05.640 SO libspdk_rpc.so.6.0 00:04:05.640 SYMLINK libspdk_rpc.so 00:04:05.640 SYMLINK libspdk_env_dpdk.so 00:04:05.640 CC lib/keyring/keyring.o 00:04:05.640 CC lib/keyring/keyring_rpc.o 00:04:05.640 CC lib/notify/notify.o 00:04:05.640 CC lib/notify/notify_rpc.o 00:04:05.640 CC lib/trace/trace.o 00:04:05.640 CC lib/trace/trace_flags.o 00:04:05.640 CC lib/trace/trace_rpc.o 00:04:05.640 LIB libspdk_notify.a 00:04:05.640 SO libspdk_notify.so.6.0 00:04:05.640 LIB libspdk_keyring.a 00:04:05.640 SYMLINK libspdk_notify.so 00:04:05.640 SO libspdk_keyring.so.2.0 00:04:05.640 LIB libspdk_trace.a 00:04:05.640 SO libspdk_trace.so.11.0 00:04:05.640 SYMLINK libspdk_keyring.so 00:04:05.640 SYMLINK libspdk_trace.so 00:04:05.640 CC lib/thread/thread.o 00:04:05.640 CC lib/thread/iobuf.o 00:04:05.640 CC lib/sock/sock.o 00:04:05.640 CC lib/sock/sock_rpc.o 00:04:05.640 LIB libspdk_sock.a 00:04:05.640 SO libspdk_sock.so.10.0 00:04:05.640 SYMLINK libspdk_sock.so 00:04:05.640 CC lib/nvme/nvme_ctrlr.o 00:04:05.640 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:05.640 CC lib/nvme/nvme_ns_cmd.o 00:04:05.640 CC lib/nvme/nvme_fabric.o 00:04:05.640 CC lib/nvme/nvme_ns.o 00:04:05.640 CC lib/nvme/nvme_qpair.o 00:04:05.640 CC lib/nvme/nvme_pcie_common.o 00:04:05.640 CC lib/nvme/nvme_pcie.o 00:04:05.640 CC lib/nvme/nvme.o 00:04:05.640 LIB libspdk_thread.a 00:04:05.640 SO libspdk_thread.so.11.0 00:04:05.640 CC lib/nvme/nvme_quirks.o 00:04:05.640 CC lib/nvme/nvme_transport.o 00:04:05.640 SYMLINK libspdk_thread.so 00:04:05.640 CC lib/nvme/nvme_discovery.o 00:04:05.640 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:05.640 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:05.640 CC lib/accel/accel.o 00:04:05.640 CC lib/nvme/nvme_tcp.o 00:04:05.640 CC lib/nvme/nvme_opal.o 00:04:05.640 CC lib/nvme/nvme_io_msg.o 00:04:05.640 CC lib/nvme/nvme_poll_group.o 00:04:05.640 CC lib/accel/accel_rpc.o 00:04:05.640 CC lib/nvme/nvme_zns.o 00:04:05.640 CC lib/blob/blobstore.o 00:04:05.640 CC lib/accel/accel_sw.o 00:04:05.640 CC lib/init/json_config.o 00:04:05.640 CC lib/virtio/virtio.o 00:04:05.640 CC lib/init/subsystem.o 00:04:05.898 CC lib/init/subsystem_rpc.o 00:04:05.898 CC lib/init/rpc.o 00:04:05.898 CC lib/blob/request.o 00:04:05.898 LIB libspdk_accel.a 00:04:05.898 CC lib/nvme/nvme_stubs.o 00:04:05.898 SO libspdk_accel.so.16.0 00:04:05.898 CC lib/virtio/virtio_vhost_user.o 00:04:05.898 CC lib/nvme/nvme_auth.o 00:04:05.898 SYMLINK libspdk_accel.so 00:04:06.157 LIB libspdk_init.a 00:04:06.157 CC lib/nvme/nvme_cuse.o 00:04:06.157 SO libspdk_init.so.6.0 00:04:06.157 CC lib/nvme/nvme_rdma.o 00:04:06.157 SYMLINK libspdk_init.so 00:04:06.157 CC lib/virtio/virtio_vfio_user.o 00:04:06.157 CC lib/fsdev/fsdev.o 00:04:06.157 CC lib/fsdev/fsdev_io.o 00:04:06.415 CC lib/fsdev/fsdev_rpc.o 00:04:06.415 CC lib/virtio/virtio_pci.o 00:04:06.415 CC lib/bdev/bdev.o 00:04:06.415 CC lib/bdev/bdev_rpc.o 00:04:06.674 CC lib/event/app.o 00:04:06.674 CC lib/bdev/bdev_zone.o 00:04:06.674 LIB libspdk_virtio.a 
00:04:06.933 SO libspdk_virtio.so.7.0 00:04:06.933 CC lib/bdev/part.o 00:04:06.933 CC lib/bdev/scsi_nvme.o 00:04:06.933 SYMLINK libspdk_virtio.so 00:04:06.933 CC lib/event/reactor.o 00:04:06.933 LIB libspdk_fsdev.a 00:04:06.933 SO libspdk_fsdev.so.2.0 00:04:06.933 CC lib/blob/zeroes.o 00:04:06.933 CC lib/event/log_rpc.o 00:04:06.933 SYMLINK libspdk_fsdev.so 00:04:06.933 CC lib/event/app_rpc.o 00:04:06.933 CC lib/event/scheduler_static.o 00:04:06.933 CC lib/blob/blob_bs_dev.o 00:04:07.501 LIB libspdk_event.a 00:04:07.501 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:07.501 SO libspdk_event.so.14.0 00:04:07.501 SYMLINK libspdk_event.so 00:04:07.501 LIB libspdk_nvme.a 00:04:07.759 SO libspdk_nvme.so.15.0 00:04:08.018 SYMLINK libspdk_nvme.so 00:04:08.018 LIB libspdk_fuse_dispatcher.a 00:04:08.018 SO libspdk_fuse_dispatcher.so.1.0 00:04:08.018 SYMLINK libspdk_fuse_dispatcher.so 00:04:08.587 LIB libspdk_blob.a 00:04:08.587 SO libspdk_blob.so.11.0 00:04:08.587 SYMLINK libspdk_blob.so 00:04:08.845 CC lib/blobfs/blobfs.o 00:04:08.845 CC lib/blobfs/tree.o 00:04:08.845 CC lib/lvol/lvol.o 00:04:09.103 LIB libspdk_bdev.a 00:04:09.362 SO libspdk_bdev.so.17.0 00:04:09.362 SYMLINK libspdk_bdev.so 00:04:09.621 CC lib/scsi/dev.o 00:04:09.621 CC lib/scsi/lun.o 00:04:09.621 CC lib/scsi/port.o 00:04:09.621 CC lib/scsi/scsi.o 00:04:09.621 CC lib/nbd/nbd.o 00:04:09.621 CC lib/ublk/ublk.o 00:04:09.621 CC lib/nvmf/ctrlr.o 00:04:09.621 CC lib/ftl/ftl_core.o 00:04:09.621 CC lib/ftl/ftl_init.o 00:04:09.879 CC lib/ftl/ftl_layout.o 00:04:09.879 LIB libspdk_blobfs.a 00:04:09.879 SO libspdk_blobfs.so.10.0 00:04:09.879 CC lib/nvmf/ctrlr_discovery.o 00:04:09.879 SYMLINK libspdk_blobfs.so 00:04:09.879 CC lib/nvmf/ctrlr_bdev.o 00:04:09.879 CC lib/scsi/scsi_bdev.o 00:04:09.879 LIB libspdk_lvol.a 00:04:09.879 CC lib/nvmf/subsystem.o 00:04:09.879 SO libspdk_lvol.so.10.0 00:04:10.137 CC lib/nbd/nbd_rpc.o 00:04:10.137 SYMLINK libspdk_lvol.so 00:04:10.137 CC lib/ublk/ublk_rpc.o 00:04:10.137 CC lib/ftl/ftl_debug.o 00:04:10.137 CC lib/ftl/ftl_io.o 00:04:10.137 LIB libspdk_nbd.a 00:04:10.137 CC lib/ftl/ftl_sb.o 00:04:10.137 LIB libspdk_ublk.a 00:04:10.137 SO libspdk_nbd.so.7.0 00:04:10.395 SO libspdk_ublk.so.3.0 00:04:10.395 CC lib/ftl/ftl_l2p.o 00:04:10.395 SYMLINK libspdk_nbd.so 00:04:10.395 CC lib/ftl/ftl_l2p_flat.o 00:04:10.395 SYMLINK libspdk_ublk.so 00:04:10.395 CC lib/ftl/ftl_nv_cache.o 00:04:10.395 CC lib/nvmf/nvmf.o 00:04:10.395 CC lib/scsi/scsi_pr.o 00:04:10.395 CC lib/nvmf/nvmf_rpc.o 00:04:10.395 CC lib/nvmf/transport.o 00:04:10.654 CC lib/ftl/ftl_band.o 00:04:10.654 CC lib/ftl/ftl_band_ops.o 00:04:10.654 CC lib/ftl/ftl_writer.o 00:04:10.654 CC lib/scsi/scsi_rpc.o 00:04:10.911 CC lib/scsi/task.o 00:04:10.911 CC lib/nvmf/tcp.o 00:04:10.911 CC lib/ftl/ftl_rq.o 00:04:10.911 CC lib/nvmf/stubs.o 00:04:11.170 LIB libspdk_scsi.a 00:04:11.170 CC lib/nvmf/mdns_server.o 00:04:11.170 SO libspdk_scsi.so.9.0 00:04:11.170 CC lib/nvmf/rdma.o 00:04:11.170 CC lib/nvmf/auth.o 00:04:11.170 SYMLINK libspdk_scsi.so 00:04:11.170 CC lib/ftl/ftl_reloc.o 00:04:11.170 CC lib/ftl/ftl_l2p_cache.o 00:04:11.428 CC lib/ftl/ftl_p2l.o 00:04:11.428 CC lib/ftl/ftl_p2l_log.o 00:04:11.428 CC lib/iscsi/conn.o 00:04:11.428 CC lib/vhost/vhost.o 00:04:11.686 CC lib/vhost/vhost_rpc.o 00:04:11.686 CC lib/vhost/vhost_scsi.o 00:04:11.686 CC lib/ftl/mngt/ftl_mngt.o 00:04:11.686 CC lib/vhost/vhost_blk.o 00:04:11.945 CC lib/vhost/rte_vhost_user.o 00:04:11.945 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:12.203 CC lib/iscsi/init_grp.o 00:04:12.203 CC lib/iscsi/iscsi.o 
00:04:12.203 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:12.203 CC lib/iscsi/param.o 00:04:12.462 CC lib/iscsi/portal_grp.o 00:04:12.462 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:12.462 CC lib/iscsi/tgt_node.o 00:04:12.462 CC lib/iscsi/iscsi_subsystem.o 00:04:12.720 CC lib/iscsi/iscsi_rpc.o 00:04:12.720 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:12.720 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:12.720 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:12.720 CC lib/iscsi/task.o 00:04:12.978 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:12.978 LIB libspdk_vhost.a 00:04:12.978 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:12.978 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:12.978 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:12.978 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:12.978 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:12.978 CC lib/ftl/utils/ftl_conf.o 00:04:12.978 SO libspdk_vhost.so.8.0 00:04:13.236 SYMLINK libspdk_vhost.so 00:04:13.236 CC lib/ftl/utils/ftl_md.o 00:04:13.236 CC lib/ftl/utils/ftl_mempool.o 00:04:13.236 CC lib/ftl/utils/ftl_bitmap.o 00:04:13.236 CC lib/ftl/utils/ftl_property.o 00:04:13.236 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:13.236 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:13.495 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:13.495 LIB libspdk_nvmf.a 00:04:13.495 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:13.495 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:13.495 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:13.495 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:13.495 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:13.495 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:13.495 SO libspdk_nvmf.so.20.0 00:04:13.495 LIB libspdk_iscsi.a 00:04:13.753 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:13.753 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:13.753 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:13.753 SO libspdk_iscsi.so.8.0 00:04:13.753 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:13.753 CC lib/ftl/base/ftl_base_dev.o 00:04:13.753 CC lib/ftl/base/ftl_base_bdev.o 00:04:13.753 SYMLINK libspdk_nvmf.so 00:04:13.753 CC lib/ftl/ftl_trace.o 00:04:13.753 SYMLINK libspdk_iscsi.so 00:04:14.012 LIB libspdk_ftl.a 00:04:14.270 SO libspdk_ftl.so.9.0 00:04:14.529 SYMLINK libspdk_ftl.so 00:04:14.788 CC module/env_dpdk/env_dpdk_rpc.o 00:04:15.056 CC module/sock/posix/posix.o 00:04:15.056 CC module/accel/ioat/accel_ioat.o 00:04:15.056 CC module/accel/error/accel_error.o 00:04:15.056 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:15.056 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:15.056 CC module/fsdev/aio/fsdev_aio.o 00:04:15.056 CC module/sock/uring/uring.o 00:04:15.056 CC module/blob/bdev/blob_bdev.o 00:04:15.056 CC module/keyring/file/keyring.o 00:04:15.056 LIB libspdk_env_dpdk_rpc.a 00:04:15.056 SO libspdk_env_dpdk_rpc.so.6.0 00:04:15.056 SYMLINK libspdk_env_dpdk_rpc.so 00:04:15.056 CC module/keyring/file/keyring_rpc.o 00:04:15.378 CC module/accel/ioat/accel_ioat_rpc.o 00:04:15.378 CC module/accel/error/accel_error_rpc.o 00:04:15.378 LIB libspdk_scheduler_dpdk_governor.a 00:04:15.378 LIB libspdk_scheduler_dynamic.a 00:04:15.378 SO libspdk_scheduler_dynamic.so.4.0 00:04:15.378 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:15.378 LIB libspdk_blob_bdev.a 00:04:15.378 LIB libspdk_keyring_file.a 00:04:15.378 SYMLINK libspdk_scheduler_dynamic.so 00:04:15.378 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:15.378 SO libspdk_keyring_file.so.2.0 00:04:15.378 SO libspdk_blob_bdev.so.11.0 00:04:15.378 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:15.378 LIB libspdk_accel_ioat.a 00:04:15.378 LIB libspdk_accel_error.a 00:04:15.378 SO libspdk_accel_ioat.so.6.0 
00:04:15.378 SYMLINK libspdk_keyring_file.so 00:04:15.378 SYMLINK libspdk_blob_bdev.so 00:04:15.378 SO libspdk_accel_error.so.2.0 00:04:15.378 CC module/fsdev/aio/linux_aio_mgr.o 00:04:15.378 CC module/keyring/linux/keyring.o 00:04:15.378 SYMLINK libspdk_accel_ioat.so 00:04:15.637 SYMLINK libspdk_accel_error.so 00:04:15.637 CC module/keyring/linux/keyring_rpc.o 00:04:15.637 CC module/scheduler/gscheduler/gscheduler.o 00:04:15.637 LIB libspdk_keyring_linux.a 00:04:15.637 SO libspdk_keyring_linux.so.1.0 00:04:15.637 LIB libspdk_sock_uring.a 00:04:15.637 CC module/accel/dsa/accel_dsa.o 00:04:15.637 LIB libspdk_fsdev_aio.a 00:04:15.637 CC module/bdev/delay/vbdev_delay.o 00:04:15.637 SO libspdk_sock_uring.so.5.0 00:04:15.637 CC module/blobfs/bdev/blobfs_bdev.o 00:04:15.637 LIB libspdk_scheduler_gscheduler.a 00:04:15.637 LIB libspdk_sock_posix.a 00:04:15.637 SO libspdk_fsdev_aio.so.1.0 00:04:15.895 SO libspdk_scheduler_gscheduler.so.4.0 00:04:15.895 SYMLINK libspdk_keyring_linux.so 00:04:15.895 SO libspdk_sock_posix.so.6.0 00:04:15.895 SYMLINK libspdk_sock_uring.so 00:04:15.895 CC module/accel/dsa/accel_dsa_rpc.o 00:04:15.895 CC module/bdev/error/vbdev_error.o 00:04:15.895 SYMLINK libspdk_fsdev_aio.so 00:04:15.895 CC module/bdev/error/vbdev_error_rpc.o 00:04:15.895 SYMLINK libspdk_scheduler_gscheduler.so 00:04:15.895 CC module/bdev/gpt/gpt.o 00:04:15.895 SYMLINK libspdk_sock_posix.so 00:04:15.895 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:15.895 CC module/accel/iaa/accel_iaa.o 00:04:15.895 CC module/bdev/lvol/vbdev_lvol.o 00:04:15.895 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:15.895 LIB libspdk_accel_dsa.a 00:04:16.154 CC module/bdev/malloc/bdev_malloc.o 00:04:16.154 SO libspdk_accel_dsa.so.5.0 00:04:16.154 CC module/bdev/gpt/vbdev_gpt.o 00:04:16.154 LIB libspdk_bdev_error.a 00:04:16.154 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:16.154 CC module/bdev/null/bdev_null.o 00:04:16.154 SYMLINK libspdk_accel_dsa.so 00:04:16.154 SO libspdk_bdev_error.so.6.0 00:04:16.154 LIB libspdk_blobfs_bdev.a 00:04:16.154 SO libspdk_blobfs_bdev.so.6.0 00:04:16.154 SYMLINK libspdk_bdev_error.so 00:04:16.154 LIB libspdk_bdev_delay.a 00:04:16.154 CC module/accel/iaa/accel_iaa_rpc.o 00:04:16.154 SYMLINK libspdk_blobfs_bdev.so 00:04:16.154 SO libspdk_bdev_delay.so.6.0 00:04:16.411 SYMLINK libspdk_bdev_delay.so 00:04:16.411 CC module/bdev/nvme/bdev_nvme.o 00:04:16.411 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:16.411 LIB libspdk_bdev_gpt.a 00:04:16.411 LIB libspdk_accel_iaa.a 00:04:16.411 SO libspdk_bdev_gpt.so.6.0 00:04:16.411 CC module/bdev/passthru/vbdev_passthru.o 00:04:16.411 CC module/bdev/raid/bdev_raid.o 00:04:16.411 SO libspdk_accel_iaa.so.3.0 00:04:16.411 LIB libspdk_bdev_malloc.a 00:04:16.411 CC module/bdev/null/bdev_null_rpc.o 00:04:16.411 SO libspdk_bdev_malloc.so.6.0 00:04:16.411 SYMLINK libspdk_bdev_gpt.so 00:04:16.411 CC module/bdev/split/vbdev_split.o 00:04:16.411 SYMLINK libspdk_accel_iaa.so 00:04:16.411 CC module/bdev/split/vbdev_split_rpc.o 00:04:16.411 SYMLINK libspdk_bdev_malloc.so 00:04:16.668 LIB libspdk_bdev_null.a 00:04:16.668 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:16.668 SO libspdk_bdev_null.so.6.0 00:04:16.668 CC module/bdev/uring/bdev_uring.o 00:04:16.668 LIB libspdk_bdev_lvol.a 00:04:16.668 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:16.668 CC module/bdev/uring/bdev_uring_rpc.o 00:04:16.668 SYMLINK libspdk_bdev_null.so 00:04:16.668 CC module/bdev/aio/bdev_aio.o 00:04:16.668 SO libspdk_bdev_lvol.so.6.0 00:04:16.668 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:16.668 
LIB libspdk_bdev_split.a 00:04:16.668 SO libspdk_bdev_split.so.6.0 00:04:16.926 SYMLINK libspdk_bdev_lvol.so 00:04:16.926 CC module/bdev/nvme/nvme_rpc.o 00:04:16.926 SYMLINK libspdk_bdev_split.so 00:04:16.926 CC module/bdev/nvme/bdev_mdns_client.o 00:04:16.926 LIB libspdk_bdev_passthru.a 00:04:16.926 SO libspdk_bdev_passthru.so.6.0 00:04:16.926 CC module/bdev/nvme/vbdev_opal.o 00:04:16.926 SYMLINK libspdk_bdev_passthru.so 00:04:16.926 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:17.183 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:17.183 LIB libspdk_bdev_uring.a 00:04:17.183 SO libspdk_bdev_uring.so.6.0 00:04:17.183 CC module/bdev/aio/bdev_aio_rpc.o 00:04:17.183 SYMLINK libspdk_bdev_uring.so 00:04:17.183 LIB libspdk_bdev_zone_block.a 00:04:17.183 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:17.183 CC module/bdev/raid/bdev_raid_rpc.o 00:04:17.183 SO libspdk_bdev_zone_block.so.6.0 00:04:17.441 CC module/bdev/ftl/bdev_ftl.o 00:04:17.441 LIB libspdk_bdev_aio.a 00:04:17.441 SYMLINK libspdk_bdev_zone_block.so 00:04:17.441 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:17.441 CC module/bdev/iscsi/bdev_iscsi.o 00:04:17.441 SO libspdk_bdev_aio.so.6.0 00:04:17.441 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:17.441 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:17.441 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:17.441 SYMLINK libspdk_bdev_aio.so 00:04:17.441 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:17.441 CC module/bdev/raid/bdev_raid_sb.o 00:04:17.441 CC module/bdev/raid/raid0.o 00:04:17.699 CC module/bdev/raid/raid1.o 00:04:17.699 LIB libspdk_bdev_ftl.a 00:04:17.699 CC module/bdev/raid/concat.o 00:04:17.699 SO libspdk_bdev_ftl.so.6.0 00:04:17.699 SYMLINK libspdk_bdev_ftl.so 00:04:17.699 LIB libspdk_bdev_iscsi.a 00:04:17.968 SO libspdk_bdev_iscsi.so.6.0 00:04:17.968 LIB libspdk_bdev_raid.a 00:04:17.968 SYMLINK libspdk_bdev_iscsi.so 00:04:17.968 SO libspdk_bdev_raid.so.6.0 00:04:17.968 LIB libspdk_bdev_virtio.a 00:04:17.968 SO libspdk_bdev_virtio.so.6.0 00:04:17.968 SYMLINK libspdk_bdev_raid.so 00:04:18.226 SYMLINK libspdk_bdev_virtio.so 00:04:19.160 LIB libspdk_bdev_nvme.a 00:04:19.160 SO libspdk_bdev_nvme.so.7.1 00:04:19.160 SYMLINK libspdk_bdev_nvme.so 00:04:19.727 CC module/event/subsystems/vmd/vmd.o 00:04:19.727 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:19.727 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:19.727 CC module/event/subsystems/iobuf/iobuf.o 00:04:19.727 CC module/event/subsystems/keyring/keyring.o 00:04:19.727 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:19.727 CC module/event/subsystems/scheduler/scheduler.o 00:04:19.727 CC module/event/subsystems/fsdev/fsdev.o 00:04:19.727 CC module/event/subsystems/sock/sock.o 00:04:19.727 LIB libspdk_event_vhost_blk.a 00:04:19.727 LIB libspdk_event_keyring.a 00:04:19.985 LIB libspdk_event_scheduler.a 00:04:19.985 LIB libspdk_event_fsdev.a 00:04:19.985 LIB libspdk_event_vmd.a 00:04:19.985 SO libspdk_event_vhost_blk.so.3.0 00:04:19.985 SO libspdk_event_keyring.so.1.0 00:04:19.985 SO libspdk_event_scheduler.so.4.0 00:04:19.985 LIB libspdk_event_iobuf.a 00:04:19.985 SO libspdk_event_fsdev.so.1.0 00:04:19.985 LIB libspdk_event_sock.a 00:04:19.985 SO libspdk_event_vmd.so.6.0 00:04:19.985 SO libspdk_event_iobuf.so.3.0 00:04:19.985 SO libspdk_event_sock.so.5.0 00:04:19.985 SYMLINK libspdk_event_scheduler.so 00:04:19.985 SYMLINK libspdk_event_vhost_blk.so 00:04:19.985 SYMLINK libspdk_event_keyring.so 00:04:19.985 SYMLINK libspdk_event_fsdev.so 00:04:19.985 SYMLINK libspdk_event_vmd.so 00:04:19.985 SYMLINK libspdk_event_sock.so 
00:04:19.985 SYMLINK libspdk_event_iobuf.so 00:04:20.244 CC module/event/subsystems/accel/accel.o 00:04:20.503 LIB libspdk_event_accel.a 00:04:20.503 SO libspdk_event_accel.so.6.0 00:04:20.503 SYMLINK libspdk_event_accel.so 00:04:20.762 CC module/event/subsystems/bdev/bdev.o 00:04:21.021 LIB libspdk_event_bdev.a 00:04:21.021 SO libspdk_event_bdev.so.6.0 00:04:21.021 SYMLINK libspdk_event_bdev.so 00:04:21.279 CC module/event/subsystems/scsi/scsi.o 00:04:21.279 CC module/event/subsystems/nbd/nbd.o 00:04:21.279 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:21.279 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:21.279 CC module/event/subsystems/ublk/ublk.o 00:04:21.538 LIB libspdk_event_nbd.a 00:04:21.538 LIB libspdk_event_scsi.a 00:04:21.538 LIB libspdk_event_ublk.a 00:04:21.538 SO libspdk_event_nbd.so.6.0 00:04:21.538 SO libspdk_event_scsi.so.6.0 00:04:21.538 SO libspdk_event_ublk.so.3.0 00:04:21.538 SYMLINK libspdk_event_nbd.so 00:04:21.538 SYMLINK libspdk_event_scsi.so 00:04:21.538 SYMLINK libspdk_event_ublk.so 00:04:21.538 LIB libspdk_event_nvmf.a 00:04:21.538 SO libspdk_event_nvmf.so.6.0 00:04:21.796 SYMLINK libspdk_event_nvmf.so 00:04:21.796 CC module/event/subsystems/iscsi/iscsi.o 00:04:21.796 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:22.055 LIB libspdk_event_vhost_scsi.a 00:04:22.055 LIB libspdk_event_iscsi.a 00:04:22.055 SO libspdk_event_vhost_scsi.so.3.0 00:04:22.055 SO libspdk_event_iscsi.so.6.0 00:04:22.055 SYMLINK libspdk_event_vhost_scsi.so 00:04:22.055 SYMLINK libspdk_event_iscsi.so 00:04:22.312 SO libspdk.so.6.0 00:04:22.312 SYMLINK libspdk.so 00:04:22.569 CC app/trace_record/trace_record.o 00:04:22.569 CXX app/trace/trace.o 00:04:22.569 TEST_HEADER include/spdk/accel.h 00:04:22.569 TEST_HEADER include/spdk/accel_module.h 00:04:22.569 TEST_HEADER include/spdk/assert.h 00:04:22.569 TEST_HEADER include/spdk/barrier.h 00:04:22.569 TEST_HEADER include/spdk/base64.h 00:04:22.569 TEST_HEADER include/spdk/bdev.h 00:04:22.569 TEST_HEADER include/spdk/bdev_module.h 00:04:22.569 TEST_HEADER include/spdk/bdev_zone.h 00:04:22.569 TEST_HEADER include/spdk/bit_array.h 00:04:22.569 TEST_HEADER include/spdk/bit_pool.h 00:04:22.569 TEST_HEADER include/spdk/blob_bdev.h 00:04:22.569 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:22.569 TEST_HEADER include/spdk/blobfs.h 00:04:22.569 TEST_HEADER include/spdk/blob.h 00:04:22.569 TEST_HEADER include/spdk/conf.h 00:04:22.569 TEST_HEADER include/spdk/config.h 00:04:22.569 TEST_HEADER include/spdk/cpuset.h 00:04:22.569 TEST_HEADER include/spdk/crc16.h 00:04:22.569 TEST_HEADER include/spdk/crc32.h 00:04:22.569 TEST_HEADER include/spdk/crc64.h 00:04:22.569 CC app/iscsi_tgt/iscsi_tgt.o 00:04:22.569 TEST_HEADER include/spdk/dif.h 00:04:22.569 TEST_HEADER include/spdk/dma.h 00:04:22.569 CC app/nvmf_tgt/nvmf_main.o 00:04:22.569 TEST_HEADER include/spdk/endian.h 00:04:22.569 TEST_HEADER include/spdk/env_dpdk.h 00:04:22.569 TEST_HEADER include/spdk/env.h 00:04:22.569 TEST_HEADER include/spdk/event.h 00:04:22.569 TEST_HEADER include/spdk/fd_group.h 00:04:22.569 TEST_HEADER include/spdk/fd.h 00:04:22.569 TEST_HEADER include/spdk/file.h 00:04:22.569 TEST_HEADER include/spdk/fsdev.h 00:04:22.569 TEST_HEADER include/spdk/fsdev_module.h 00:04:22.569 TEST_HEADER include/spdk/ftl.h 00:04:22.569 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:22.569 TEST_HEADER include/spdk/gpt_spec.h 00:04:22.569 TEST_HEADER include/spdk/hexlify.h 00:04:22.569 CC test/thread/poller_perf/poller_perf.o 00:04:22.569 TEST_HEADER include/spdk/histogram_data.h 00:04:22.569 
TEST_HEADER include/spdk/idxd.h 00:04:22.569 TEST_HEADER include/spdk/idxd_spec.h 00:04:22.569 TEST_HEADER include/spdk/init.h 00:04:22.569 TEST_HEADER include/spdk/ioat.h 00:04:22.569 TEST_HEADER include/spdk/ioat_spec.h 00:04:22.569 TEST_HEADER include/spdk/iscsi_spec.h 00:04:22.569 TEST_HEADER include/spdk/json.h 00:04:22.569 TEST_HEADER include/spdk/jsonrpc.h 00:04:22.569 TEST_HEADER include/spdk/keyring.h 00:04:22.569 CC examples/util/zipf/zipf.o 00:04:22.569 TEST_HEADER include/spdk/keyring_module.h 00:04:22.569 TEST_HEADER include/spdk/likely.h 00:04:22.569 TEST_HEADER include/spdk/log.h 00:04:22.569 TEST_HEADER include/spdk/lvol.h 00:04:22.569 TEST_HEADER include/spdk/md5.h 00:04:22.569 TEST_HEADER include/spdk/memory.h 00:04:22.569 TEST_HEADER include/spdk/mmio.h 00:04:22.569 TEST_HEADER include/spdk/nbd.h 00:04:22.569 TEST_HEADER include/spdk/net.h 00:04:22.569 TEST_HEADER include/spdk/notify.h 00:04:22.569 TEST_HEADER include/spdk/nvme.h 00:04:22.569 CC test/dma/test_dma/test_dma.o 00:04:22.569 TEST_HEADER include/spdk/nvme_intel.h 00:04:22.570 CC test/app/bdev_svc/bdev_svc.o 00:04:22.570 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:22.570 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:22.570 TEST_HEADER include/spdk/nvme_spec.h 00:04:22.570 TEST_HEADER include/spdk/nvme_zns.h 00:04:22.570 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:22.570 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:22.570 TEST_HEADER include/spdk/nvmf.h 00:04:22.570 TEST_HEADER include/spdk/nvmf_spec.h 00:04:22.570 TEST_HEADER include/spdk/nvmf_transport.h 00:04:22.570 TEST_HEADER include/spdk/opal.h 00:04:22.570 TEST_HEADER include/spdk/opal_spec.h 00:04:22.570 TEST_HEADER include/spdk/pci_ids.h 00:04:22.570 TEST_HEADER include/spdk/pipe.h 00:04:22.570 TEST_HEADER include/spdk/queue.h 00:04:22.570 TEST_HEADER include/spdk/reduce.h 00:04:22.570 TEST_HEADER include/spdk/rpc.h 00:04:22.570 TEST_HEADER include/spdk/scheduler.h 00:04:22.570 TEST_HEADER include/spdk/scsi.h 00:04:22.570 TEST_HEADER include/spdk/scsi_spec.h 00:04:22.570 TEST_HEADER include/spdk/sock.h 00:04:22.570 TEST_HEADER include/spdk/stdinc.h 00:04:22.570 TEST_HEADER include/spdk/string.h 00:04:22.570 TEST_HEADER include/spdk/thread.h 00:04:22.570 TEST_HEADER include/spdk/trace.h 00:04:22.570 TEST_HEADER include/spdk/trace_parser.h 00:04:22.570 TEST_HEADER include/spdk/tree.h 00:04:22.828 TEST_HEADER include/spdk/ublk.h 00:04:22.828 TEST_HEADER include/spdk/util.h 00:04:22.828 TEST_HEADER include/spdk/uuid.h 00:04:22.828 TEST_HEADER include/spdk/version.h 00:04:22.828 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:22.828 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:22.828 TEST_HEADER include/spdk/vhost.h 00:04:22.828 TEST_HEADER include/spdk/vmd.h 00:04:22.828 CC test/env/mem_callbacks/mem_callbacks.o 00:04:22.828 TEST_HEADER include/spdk/xor.h 00:04:22.828 TEST_HEADER include/spdk/zipf.h 00:04:22.828 CXX test/cpp_headers/accel.o 00:04:22.828 LINK nvmf_tgt 00:04:22.828 LINK zipf 00:04:22.828 LINK iscsi_tgt 00:04:22.828 LINK poller_perf 00:04:22.828 LINK spdk_trace_record 00:04:22.828 LINK bdev_svc 00:04:23.086 LINK mem_callbacks 00:04:23.086 CXX test/cpp_headers/accel_module.o 00:04:23.086 LINK spdk_trace 00:04:23.086 CC test/app/histogram_perf/histogram_perf.o 00:04:23.086 CC test/app/jsoncat/jsoncat.o 00:04:23.086 CC examples/ioat/perf/perf.o 00:04:23.086 CXX test/cpp_headers/assert.o 00:04:23.086 CC app/spdk_tgt/spdk_tgt.o 00:04:23.086 CC test/app/stub/stub.o 00:04:23.344 CC test/env/vtophys/vtophys.o 00:04:23.344 LINK test_dma 
00:04:23.344 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:23.344 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:23.344 LINK jsoncat 00:04:23.344 LINK histogram_perf 00:04:23.344 CXX test/cpp_headers/barrier.o 00:04:23.344 LINK vtophys 00:04:23.344 LINK stub 00:04:23.344 LINK ioat_perf 00:04:23.344 LINK spdk_tgt 00:04:23.344 LINK env_dpdk_post_init 00:04:23.601 CC app/spdk_nvme_perf/perf.o 00:04:23.602 CC app/spdk_lspci/spdk_lspci.o 00:04:23.602 CXX test/cpp_headers/base64.o 00:04:23.602 CC app/spdk_nvme_identify/identify.o 00:04:23.602 CC app/spdk_nvme_discover/discovery_aer.o 00:04:23.602 CC examples/ioat/verify/verify.o 00:04:23.602 LINK nvme_fuzz 00:04:23.602 CC app/spdk_top/spdk_top.o 00:04:23.602 LINK spdk_lspci 00:04:23.602 CC test/env/memory/memory_ut.o 00:04:23.859 CXX test/cpp_headers/bdev.o 00:04:23.859 LINK spdk_nvme_discover 00:04:23.859 CC examples/vmd/lsvmd/lsvmd.o 00:04:23.859 LINK verify 00:04:23.859 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:23.859 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:23.859 CXX test/cpp_headers/bdev_module.o 00:04:24.118 LINK lsvmd 00:04:24.118 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:24.118 CC examples/vmd/led/led.o 00:04:24.118 CC app/vhost/vhost.o 00:04:24.118 CXX test/cpp_headers/bdev_zone.o 00:04:24.376 LINK led 00:04:24.376 CC app/spdk_dd/spdk_dd.o 00:04:24.376 LINK spdk_nvme_identify 00:04:24.376 CXX test/cpp_headers/bit_array.o 00:04:24.376 LINK vhost 00:04:24.376 LINK spdk_nvme_perf 00:04:24.634 LINK vhost_fuzz 00:04:24.634 LINK memory_ut 00:04:24.634 CXX test/cpp_headers/bit_pool.o 00:04:24.634 LINK spdk_top 00:04:24.634 CXX test/cpp_headers/blob_bdev.o 00:04:24.634 CXX test/cpp_headers/blobfs_bdev.o 00:04:24.634 CC examples/idxd/perf/perf.o 00:04:24.892 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:24.892 CXX test/cpp_headers/blobfs.o 00:04:24.892 CXX test/cpp_headers/blob.o 00:04:24.892 LINK spdk_dd 00:04:24.892 CC test/env/pci/pci_ut.o 00:04:24.892 CC app/fio/nvme/fio_plugin.o 00:04:24.892 CC examples/sock/hello_world/hello_sock.o 00:04:24.892 CC examples/thread/thread/thread_ex.o 00:04:25.149 CXX test/cpp_headers/conf.o 00:04:25.149 LINK interrupt_tgt 00:04:25.149 LINK idxd_perf 00:04:25.149 CXX test/cpp_headers/config.o 00:04:25.149 CXX test/cpp_headers/cpuset.o 00:04:25.149 CC app/fio/bdev/fio_plugin.o 00:04:25.149 LINK hello_sock 00:04:25.149 CXX test/cpp_headers/crc16.o 00:04:25.407 LINK thread 00:04:25.407 CXX test/cpp_headers/crc32.o 00:04:25.407 LINK pci_ut 00:04:25.407 CC test/rpc_client/rpc_client_test.o 00:04:25.407 CXX test/cpp_headers/crc64.o 00:04:25.407 CC test/accel/dif/dif.o 00:04:25.665 LINK rpc_client_test 00:04:25.665 LINK spdk_nvme 00:04:25.665 CXX test/cpp_headers/dif.o 00:04:25.665 CC examples/nvme/hello_world/hello_world.o 00:04:25.665 LINK iscsi_fuzz 00:04:25.665 CC test/event/event_perf/event_perf.o 00:04:25.665 CC test/blobfs/mkfs/mkfs.o 00:04:25.665 CC test/event/reactor/reactor.o 00:04:25.665 LINK spdk_bdev 00:04:25.665 CC test/event/reactor_perf/reactor_perf.o 00:04:25.924 CXX test/cpp_headers/dma.o 00:04:25.924 CXX test/cpp_headers/endian.o 00:04:25.924 CXX test/cpp_headers/env_dpdk.o 00:04:25.924 LINK event_perf 00:04:25.924 LINK hello_world 00:04:25.924 LINK reactor 00:04:25.924 LINK reactor_perf 00:04:25.924 CC examples/accel/perf/accel_perf.o 00:04:25.924 LINK mkfs 00:04:26.182 CC examples/nvme/reconnect/reconnect.o 00:04:26.182 CXX test/cpp_headers/env.o 00:04:26.182 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:26.182 CC examples/nvme/arbitration/arbitration.o 
00:04:26.182 CC examples/nvme/hotplug/hotplug.o 00:04:26.182 LINK dif 00:04:26.182 CC test/event/app_repeat/app_repeat.o 00:04:26.182 CXX test/cpp_headers/event.o 00:04:26.440 CC test/event/scheduler/scheduler.o 00:04:26.440 CC test/lvol/esnap/esnap.o 00:04:26.440 CXX test/cpp_headers/fd_group.o 00:04:26.440 LINK app_repeat 00:04:26.440 LINK hotplug 00:04:26.440 LINK reconnect 00:04:26.440 LINK accel_perf 00:04:26.440 LINK arbitration 00:04:26.699 LINK scheduler 00:04:26.699 CXX test/cpp_headers/fd.o 00:04:26.699 CXX test/cpp_headers/file.o 00:04:26.699 CXX test/cpp_headers/fsdev.o 00:04:26.699 CXX test/cpp_headers/fsdev_module.o 00:04:26.699 LINK nvme_manage 00:04:26.699 CC examples/blob/hello_world/hello_blob.o 00:04:26.699 CXX test/cpp_headers/ftl.o 00:04:26.957 CXX test/cpp_headers/fuse_dispatcher.o 00:04:26.957 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:26.958 CXX test/cpp_headers/gpt_spec.o 00:04:26.958 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:26.958 CC examples/nvme/abort/abort.o 00:04:26.958 LINK hello_blob 00:04:26.958 CC test/nvme/aer/aer.o 00:04:26.958 CC examples/bdev/hello_world/hello_bdev.o 00:04:26.958 CC test/nvme/reset/reset.o 00:04:27.216 CXX test/cpp_headers/hexlify.o 00:04:27.216 CC test/nvme/sgl/sgl.o 00:04:27.216 LINK cmb_copy 00:04:27.216 LINK hello_fsdev 00:04:27.216 CXX test/cpp_headers/histogram_data.o 00:04:27.216 LINK hello_bdev 00:04:27.216 CC examples/blob/cli/blobcli.o 00:04:27.216 LINK aer 00:04:27.474 LINK reset 00:04:27.474 LINK abort 00:04:27.474 CC test/nvme/e2edp/nvme_dp.o 00:04:27.474 CXX test/cpp_headers/idxd.o 00:04:27.474 LINK sgl 00:04:27.474 CC test/nvme/overhead/overhead.o 00:04:27.732 CXX test/cpp_headers/idxd_spec.o 00:04:27.732 CC test/nvme/err_injection/err_injection.o 00:04:27.732 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:27.732 CC examples/bdev/bdevperf/bdevperf.o 00:04:27.732 CC test/nvme/startup/startup.o 00:04:27.732 LINK nvme_dp 00:04:27.732 CC test/bdev/bdevio/bdevio.o 00:04:27.732 CXX test/cpp_headers/init.o 00:04:27.732 LINK overhead 00:04:28.041 LINK blobcli 00:04:28.042 LINK pmr_persistence 00:04:28.042 LINK err_injection 00:04:28.042 LINK startup 00:04:28.042 CXX test/cpp_headers/ioat.o 00:04:28.042 CC test/nvme/reserve/reserve.o 00:04:28.315 CC test/nvme/simple_copy/simple_copy.o 00:04:28.315 CC test/nvme/connect_stress/connect_stress.o 00:04:28.315 CXX test/cpp_headers/ioat_spec.o 00:04:28.315 CC test/nvme/boot_partition/boot_partition.o 00:04:28.315 CC test/nvme/compliance/nvme_compliance.o 00:04:28.315 LINK bdevio 00:04:28.315 CC test/nvme/fused_ordering/fused_ordering.o 00:04:28.315 LINK reserve 00:04:28.315 CXX test/cpp_headers/iscsi_spec.o 00:04:28.315 LINK connect_stress 00:04:28.315 LINK boot_partition 00:04:28.315 LINK simple_copy 00:04:28.574 CXX test/cpp_headers/json.o 00:04:28.574 LINK fused_ordering 00:04:28.574 LINK bdevperf 00:04:28.574 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:28.574 CXX test/cpp_headers/jsonrpc.o 00:04:28.574 CXX test/cpp_headers/keyring.o 00:04:28.574 LINK nvme_compliance 00:04:28.574 CXX test/cpp_headers/keyring_module.o 00:04:28.574 CC test/nvme/fdp/fdp.o 00:04:28.574 CC test/nvme/cuse/cuse.o 00:04:28.862 CXX test/cpp_headers/likely.o 00:04:28.862 CXX test/cpp_headers/log.o 00:04:28.862 LINK doorbell_aers 00:04:28.862 CXX test/cpp_headers/lvol.o 00:04:28.862 CXX test/cpp_headers/md5.o 00:04:28.862 CXX test/cpp_headers/memory.o 00:04:28.862 CXX test/cpp_headers/mmio.o 00:04:28.862 CXX test/cpp_headers/nbd.o 00:04:29.120 CXX test/cpp_headers/net.o 00:04:29.120 CXX 
test/cpp_headers/notify.o 00:04:29.120 CXX test/cpp_headers/nvme.o 00:04:29.120 CC examples/nvmf/nvmf/nvmf.o 00:04:29.120 LINK fdp 00:04:29.120 CXX test/cpp_headers/nvme_intel.o 00:04:29.120 CXX test/cpp_headers/nvme_ocssd.o 00:04:29.120 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:29.120 CXX test/cpp_headers/nvme_spec.o 00:04:29.120 CXX test/cpp_headers/nvme_zns.o 00:04:29.378 CXX test/cpp_headers/nvmf_cmd.o 00:04:29.378 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:29.378 CXX test/cpp_headers/nvmf.o 00:04:29.378 CXX test/cpp_headers/nvmf_spec.o 00:04:29.378 CXX test/cpp_headers/nvmf_transport.o 00:04:29.378 CXX test/cpp_headers/opal.o 00:04:29.378 LINK nvmf 00:04:29.378 CXX test/cpp_headers/opal_spec.o 00:04:29.378 CXX test/cpp_headers/pci_ids.o 00:04:29.378 CXX test/cpp_headers/pipe.o 00:04:29.378 CXX test/cpp_headers/queue.o 00:04:29.636 CXX test/cpp_headers/reduce.o 00:04:29.636 CXX test/cpp_headers/rpc.o 00:04:29.636 CXX test/cpp_headers/scheduler.o 00:04:29.636 CXX test/cpp_headers/scsi.o 00:04:29.636 CXX test/cpp_headers/scsi_spec.o 00:04:29.636 CXX test/cpp_headers/sock.o 00:04:29.636 CXX test/cpp_headers/stdinc.o 00:04:29.636 CXX test/cpp_headers/string.o 00:04:29.636 CXX test/cpp_headers/thread.o 00:04:29.636 CXX test/cpp_headers/trace.o 00:04:29.894 CXX test/cpp_headers/trace_parser.o 00:04:29.894 CXX test/cpp_headers/tree.o 00:04:29.894 CXX test/cpp_headers/ublk.o 00:04:29.894 CXX test/cpp_headers/util.o 00:04:29.894 CXX test/cpp_headers/uuid.o 00:04:29.894 CXX test/cpp_headers/version.o 00:04:29.894 CXX test/cpp_headers/vfio_user_pci.o 00:04:29.894 CXX test/cpp_headers/vfio_user_spec.o 00:04:29.894 CXX test/cpp_headers/vhost.o 00:04:29.894 CXX test/cpp_headers/vmd.o 00:04:29.894 CXX test/cpp_headers/xor.o 00:04:30.153 CXX test/cpp_headers/zipf.o 00:04:30.153 LINK cuse 00:04:32.054 LINK esnap 00:04:32.312 00:04:32.312 real 1m24.601s 00:04:32.312 user 6m57.178s 00:04:32.312 sys 1m8.565s 00:04:32.312 01:45:42 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:32.312 01:45:42 make -- common/autotest_common.sh@10 -- $ set +x 00:04:32.312 ************************************ 00:04:32.312 END TEST make 00:04:32.312 ************************************ 00:04:32.312 01:45:42 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:32.312 01:45:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:32.312 01:45:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:32.312 01:45:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:32.312 01:45:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:32.312 01:45:42 -- pm/common@44 -- $ pid=6045 00:04:32.312 01:45:42 -- pm/common@50 -- $ kill -TERM 6045 00:04:32.312 01:45:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:32.312 01:45:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:32.312 01:45:42 -- pm/common@44 -- $ pid=6047 00:04:32.312 01:45:42 -- pm/common@50 -- $ kill -TERM 6047 00:04:32.312 01:45:42 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:32.312 01:45:42 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:32.312 01:45:42 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:32.312 01:45:42 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:32.312 01:45:42 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:32.570 
01:45:42 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:32.571 01:45:42 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.571 01:45:42 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.571 01:45:42 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.571 01:45:42 -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.571 01:45:42 -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.571 01:45:42 -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.571 01:45:42 -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.571 01:45:42 -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.571 01:45:42 -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.571 01:45:42 -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.571 01:45:42 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.571 01:45:42 -- scripts/common.sh@344 -- # case "$op" in 00:04:32.571 01:45:42 -- scripts/common.sh@345 -- # : 1 00:04:32.571 01:45:42 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.571 01:45:42 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:32.571 01:45:42 -- scripts/common.sh@365 -- # decimal 1 00:04:32.571 01:45:42 -- scripts/common.sh@353 -- # local d=1 00:04:32.571 01:45:42 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.571 01:45:42 -- scripts/common.sh@355 -- # echo 1 00:04:32.571 01:45:42 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.571 01:45:42 -- scripts/common.sh@366 -- # decimal 2 00:04:32.571 01:45:42 -- scripts/common.sh@353 -- # local d=2 00:04:32.571 01:45:42 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.571 01:45:42 -- scripts/common.sh@355 -- # echo 2 00:04:32.571 01:45:42 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.571 01:45:42 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.571 01:45:42 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.571 01:45:42 -- scripts/common.sh@368 -- # return 0 00:04:32.571 01:45:42 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.571 01:45:42 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:32.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.571 --rc genhtml_branch_coverage=1 00:04:32.571 --rc genhtml_function_coverage=1 00:04:32.571 --rc genhtml_legend=1 00:04:32.571 --rc geninfo_all_blocks=1 00:04:32.571 --rc geninfo_unexecuted_blocks=1 00:04:32.571 00:04:32.571 ' 00:04:32.571 01:45:42 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:32.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.571 --rc genhtml_branch_coverage=1 00:04:32.571 --rc genhtml_function_coverage=1 00:04:32.571 --rc genhtml_legend=1 00:04:32.571 --rc geninfo_all_blocks=1 00:04:32.571 --rc geninfo_unexecuted_blocks=1 00:04:32.571 00:04:32.571 ' 00:04:32.571 01:45:42 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:32.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.571 --rc genhtml_branch_coverage=1 00:04:32.571 --rc genhtml_function_coverage=1 00:04:32.571 --rc genhtml_legend=1 00:04:32.571 --rc geninfo_all_blocks=1 00:04:32.571 --rc geninfo_unexecuted_blocks=1 00:04:32.571 00:04:32.571 ' 00:04:32.571 01:45:42 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:32.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.571 --rc genhtml_branch_coverage=1 00:04:32.571 --rc genhtml_function_coverage=1 00:04:32.571 --rc genhtml_legend=1 00:04:32.571 --rc geninfo_all_blocks=1 00:04:32.571 --rc 
geninfo_unexecuted_blocks=1 00:04:32.571 00:04:32.571 ' 00:04:32.571 01:45:42 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:32.571 01:45:42 -- nvmf/common.sh@7 -- # uname -s 00:04:32.571 01:45:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:32.571 01:45:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:32.571 01:45:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:32.571 01:45:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:32.571 01:45:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:32.571 01:45:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:32.571 01:45:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:32.571 01:45:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:32.571 01:45:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:32.571 01:45:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:32.571 01:45:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:04:32.571 01:45:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:04:32.571 01:45:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:32.571 01:45:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:32.571 01:45:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:32.571 01:45:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:32.571 01:45:42 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:32.571 01:45:42 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:32.571 01:45:42 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:32.571 01:45:42 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:32.571 01:45:42 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:32.571 01:45:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.571 01:45:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.571 01:45:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.571 01:45:42 -- paths/export.sh@5 -- # export PATH 00:04:32.571 01:45:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.571 01:45:42 -- nvmf/common.sh@51 -- # : 0 00:04:32.571 01:45:42 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:32.571 01:45:42 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:32.571 01:45:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:32.571 01:45:42 -- nvmf/common.sh@29 -- 
# NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:32.571 01:45:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:32.571 01:45:42 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:32.571 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:32.571 01:45:42 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:32.571 01:45:42 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:32.571 01:45:42 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:32.571 01:45:42 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:32.571 01:45:42 -- spdk/autotest.sh@32 -- # uname -s 00:04:32.571 01:45:42 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:32.571 01:45:42 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:32.571 01:45:42 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:32.571 01:45:43 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:32.571 01:45:43 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:32.571 01:45:43 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:32.571 01:45:43 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:32.571 01:45:43 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:32.571 01:45:43 -- spdk/autotest.sh@48 -- # udevadm_pid=66631 00:04:32.571 01:45:43 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:32.571 01:45:43 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:32.571 01:45:43 -- pm/common@17 -- # local monitor 00:04:32.571 01:45:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:32.571 01:45:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:32.571 01:45:43 -- pm/common@25 -- # sleep 1 00:04:32.571 01:45:43 -- pm/common@21 -- # date +%s 00:04:32.571 01:45:43 -- pm/common@21 -- # date +%s 00:04:32.571 01:45:43 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731980743 00:04:32.571 01:45:43 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731980743 00:04:32.571 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731980743_collect-cpu-load.pm.log 00:04:32.571 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731980743_collect-vmstat.pm.log 00:04:33.506 01:45:44 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:33.506 01:45:44 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:33.506 01:45:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:33.506 01:45:44 -- common/autotest_common.sh@10 -- # set +x 00:04:33.506 01:45:44 -- spdk/autotest.sh@59 -- # create_test_list 00:04:33.506 01:45:44 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:33.506 01:45:44 -- common/autotest_common.sh@10 -- # set +x 00:04:33.506 01:45:44 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:33.506 01:45:44 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:33.506 01:45:44 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:33.506 01:45:44 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:33.765 01:45:44 -- spdk/autotest.sh@63 -- # cd 
/home/vagrant/spdk_repo/spdk 00:04:33.765 01:45:44 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:33.765 01:45:44 -- common/autotest_common.sh@1457 -- # uname 00:04:33.765 01:45:44 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:33.765 01:45:44 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:33.765 01:45:44 -- common/autotest_common.sh@1477 -- # uname 00:04:33.765 01:45:44 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:33.765 01:45:44 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:33.765 01:45:44 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:33.765 lcov: LCOV version 1.15 00:04:33.765 01:45:44 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:51.871 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:51.871 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:06.744 01:46:17 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:06.744 01:46:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:06.744 01:46:17 -- common/autotest_common.sh@10 -- # set +x 00:05:06.744 01:46:17 -- spdk/autotest.sh@78 -- # rm -f 00:05:06.744 01:46:17 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:07.311 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:07.311 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:07.311 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:07.311 01:46:17 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:07.311 01:46:17 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:07.311 01:46:17 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:07.311 01:46:17 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:07.311 01:46:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:07.311 01:46:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:07.311 01:46:17 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:07.311 01:46:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:07.311 01:46:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:07.311 01:46:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:07.311 01:46:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:07.311 01:46:17 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:07.311 01:46:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:07.312 01:46:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:07.312 01:46:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:07.312 01:46:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:05:07.312 01:46:17 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:07.312 01:46:17 -- 
common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:07.312 01:46:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:07.312 01:46:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:07.312 01:46:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:05:07.312 01:46:17 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:05:07.312 01:46:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:07.312 01:46:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:07.312 01:46:17 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:07.312 01:46:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:07.312 01:46:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:07.312 01:46:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:07.312 01:46:17 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:07.312 01:46:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:07.312 No valid GPT data, bailing 00:05:07.312 01:46:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:07.312 01:46:17 -- scripts/common.sh@394 -- # pt= 00:05:07.312 01:46:17 -- scripts/common.sh@395 -- # return 1 00:05:07.312 01:46:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:07.312 1+0 records in 00:05:07.312 1+0 records out 00:05:07.312 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00455622 s, 230 MB/s 00:05:07.312 01:46:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:07.312 01:46:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:07.312 01:46:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:07.312 01:46:17 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:07.312 01:46:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:07.312 No valid GPT data, bailing 00:05:07.312 01:46:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:07.312 01:46:17 -- scripts/common.sh@394 -- # pt= 00:05:07.312 01:46:17 -- scripts/common.sh@395 -- # return 1 00:05:07.312 01:46:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:07.312 1+0 records in 00:05:07.312 1+0 records out 00:05:07.312 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00365638 s, 287 MB/s 00:05:07.312 01:46:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:07.312 01:46:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:07.312 01:46:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:07.312 01:46:17 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:07.312 01:46:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:07.573 No valid GPT data, bailing 00:05:07.573 01:46:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:07.573 01:46:17 -- scripts/common.sh@394 -- # pt= 00:05:07.573 01:46:17 -- scripts/common.sh@395 -- # return 1 00:05:07.573 01:46:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:07.573 1+0 records in 00:05:07.573 1+0 records out 00:05:07.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00386801 s, 271 MB/s 00:05:07.573 01:46:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:07.573 01:46:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:07.573 01:46:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:07.573 
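The xtrace above walks every NVMe namespace through a zoned-device check (read /sys/block/<dev>/queue/zoned and compare against "none"), then probes each non-partition namespace for a partition table before zeroing its first MiB. A compact, hypothetical reconstruction of that screen follows; helper names are mine, but the glob, the sysfs path, and the blkid/dd commands mirror the log. It assumes root privileges and is destructive by design.

    #!/usr/bin/env bash
    # Sketch of the pre-cleanup device screen traced above (assumption:
    # run as root; wipes the first MiB of any device it deems free).
    shopt -s extglob                        # needed for the !(*p*) glob below

    is_block_zoned() {                      # zoned namespaces are left alone
        local zoned=/sys/block/$1/queue/zoned
        [[ -e $zoned && $(<"$zoned") != none ]]
    }

    for dev in /dev/nvme*n!(*p*); do        # namespaces, skipping partitions
        is_block_zoned "$(basename "$dev")" && continue
        # empty PTTYPE from blkid means no GPT/MBR was found, so the
        # device is treated as free and its first MiB is zeroed, as in
        # the "No valid GPT data, bailing" / dd sequence above
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done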
01:46:17 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:07.573 01:46:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:07.573 No valid GPT data, bailing 00:05:07.573 01:46:18 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:07.573 01:46:18 -- scripts/common.sh@394 -- # pt= 00:05:07.573 01:46:18 -- scripts/common.sh@395 -- # return 1 00:05:07.573 01:46:18 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:07.573 1+0 records in 00:05:07.573 1+0 records out 00:05:07.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00418642 s, 250 MB/s 00:05:07.573 01:46:18 -- spdk/autotest.sh@105 -- # sync 00:05:07.573 01:46:18 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:07.573 01:46:18 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:07.573 01:46:18 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:09.522 01:46:20 -- spdk/autotest.sh@111 -- # uname -s 00:05:09.522 01:46:20 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:09.522 01:46:20 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:09.522 01:46:20 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:10.456 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:10.456 Hugepages 00:05:10.456 node hugesize free / total 00:05:10.456 node0 1048576kB 0 / 0 00:05:10.456 node0 2048kB 0 / 0 00:05:10.456 00:05:10.456 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:10.456 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:10.456 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:10.456 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:10.456 01:46:20 -- spdk/autotest.sh@117 -- # uname -s 00:05:10.456 01:46:20 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:10.456 01:46:20 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:10.456 01:46:20 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:11.390 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:11.390 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:11.390 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:11.390 01:46:21 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:12.327 01:46:22 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:12.327 01:46:22 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:12.327 01:46:22 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:12.327 01:46:22 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:12.327 01:46:22 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:12.327 01:46:22 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:12.327 01:46:22 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:12.327 01:46:22 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:12.327 01:46:22 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:12.327 01:46:22 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:12.327 01:46:22 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:12.327 01:46:22 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 
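The get_nvme_bdfs trace just above shows how autotest enumerates controllers: scripts/gen_nvme.sh emits an SPDK JSON config and jq pulls each controller's PCI address out of .config[].params.traddr. A minimal sketch of the same enumeration follows; the jq filter is taken verbatim from the log, while the sysfs fallback is an assumed equivalent for machines without the repo checkout.

    #!/usr/bin/env bash
    # Sketch of get_nvme_bdfs as traced above; the sysfs loop is an
    # assumption, not part of the original script.
    rootdir=/home/vagrant/spdk_repo/spdk
    mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

    if (( ${#bdfs[@]} == 0 )); then
        # fallback: each /sys/class/nvme/nvmeX/device symlink resolves to
        # the controller's PCI directory, whose basename is the BDF
        for ctrl in /sys/class/nvme/nvme*; do
            bdfs+=("$(basename "$(readlink -f "$ctrl/device")")")
        done
    fi
    printf '%s\n' "${bdfs[@]}"              # 0000:00:10.0 and 0000:00:11.0 here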
00:05:12.894 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:12.894 Waiting for block devices as requested 00:05:12.894 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:12.894 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:12.894 01:46:23 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:12.894 01:46:23 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:12.894 01:46:23 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:12.894 01:46:23 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:12.894 01:46:23 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:12.894 01:46:23 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:12.894 01:46:23 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:13.152 01:46:23 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:13.152 01:46:23 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:13.152 01:46:23 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:13.152 01:46:23 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:13.152 01:46:23 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:13.152 01:46:23 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:13.152 01:46:23 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:13.152 01:46:23 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:13.152 01:46:23 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:13.152 01:46:23 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:13.152 01:46:23 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:13.152 01:46:23 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:13.152 01:46:23 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:13.152 01:46:23 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:13.152 01:46:23 -- common/autotest_common.sh@1543 -- # continue 00:05:13.152 01:46:23 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:13.152 01:46:23 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:13.152 01:46:23 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:13.152 01:46:23 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:13.152 01:46:23 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:13.152 01:46:23 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:13.152 01:46:23 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:13.152 01:46:23 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:13.152 01:46:23 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:13.152 01:46:23 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:13.152 01:46:23 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:13.152 01:46:23 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:13.152 01:46:23 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:13.152 01:46:23 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:13.152 01:46:23 -- 
common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:13.152 01:46:23 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:13.152 01:46:23 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:13.152 01:46:23 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:13.153 01:46:23 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:13.153 01:46:23 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:13.153 01:46:23 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:13.153 01:46:23 -- common/autotest_common.sh@1543 -- # continue 00:05:13.153 01:46:23 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:13.153 01:46:23 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:13.153 01:46:23 -- common/autotest_common.sh@10 -- # set +x 00:05:13.153 01:46:23 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:13.153 01:46:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:13.153 01:46:23 -- common/autotest_common.sh@10 -- # set +x 00:05:13.153 01:46:23 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:13.720 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:13.979 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.979 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:13.979 01:46:24 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:13.979 01:46:24 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:13.979 01:46:24 -- common/autotest_common.sh@10 -- # set +x 00:05:13.979 01:46:24 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:13.979 01:46:24 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:13.979 01:46:24 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:13.979 01:46:24 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:13.979 01:46:24 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:13.979 01:46:24 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:13.979 01:46:24 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:13.979 01:46:24 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:13.979 01:46:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:13.979 01:46:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:13.979 01:46:24 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:13.979 01:46:24 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:13.979 01:46:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:13.979 01:46:24 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:13.979 01:46:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:13.979 01:46:24 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:13.979 01:46:24 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:13.979 01:46:24 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:13.979 01:46:24 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:13.979 01:46:24 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:13.979 01:46:24 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:13.979 01:46:24 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:13.979 01:46:24 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
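The id-ctrl probing traced above relies on nvme-cli printing one "field : value" pair per line, so a plain grep/cut is enough to read a register. OACS bit 3 (0x8) advertises namespace management support, and unvmcap == 0 means there is no unallocated NVM capacity, which is why both controllers hit "continue". A standalone sketch of that probe, with the device path taken from the log:

    #!/usr/bin/env bash
    # Sketch of the controller probe traced above. In the log both
    # controllers report oacs = 0x12a (bit 3 set) and unvmcap = 0.
    ctrlr=/dev/nvme1                                   # resolved via sysfs in the log
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
    if (( oacs & 0x8 )); then                          # namespace management supported
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
        (( unvmcap == 0 )) && echo "no unallocated capacity on $ctrlr"
    fi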
00:05:13.979 01:46:24 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:13.979 01:46:24 -- common/autotest_common.sh@1572 -- # return 0 00:05:13.979 01:46:24 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:13.979 01:46:24 -- common/autotest_common.sh@1580 -- # return 0 00:05:13.979 01:46:24 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:13.979 01:46:24 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:13.979 01:46:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:13.979 01:46:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:13.979 01:46:24 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:13.979 01:46:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:14.238 01:46:24 -- common/autotest_common.sh@10 -- # set +x 00:05:14.238 01:46:24 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:05:14.238 01:46:24 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:05:14.238 01:46:24 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:05:14.238 01:46:24 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:14.238 01:46:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.238 01:46:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.238 01:46:24 -- common/autotest_common.sh@10 -- # set +x 00:05:14.238 ************************************ 00:05:14.238 START TEST env 00:05:14.238 ************************************ 00:05:14.238 01:46:24 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:14.238 * Looking for test storage... 00:05:14.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:14.238 01:46:24 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:14.238 01:46:24 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:14.238 01:46:24 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:14.238 01:46:24 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:14.238 01:46:24 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.238 01:46:24 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.238 01:46:24 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.238 01:46:24 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.238 01:46:24 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.238 01:46:24 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.238 01:46:24 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.238 01:46:24 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.238 01:46:24 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.238 01:46:24 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.238 01:46:24 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.238 01:46:24 env -- scripts/common.sh@344 -- # case "$op" in 00:05:14.238 01:46:24 env -- scripts/common.sh@345 -- # : 1 00:05:14.238 01:46:24 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.239 01:46:24 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.239 01:46:24 env -- scripts/common.sh@365 -- # decimal 1 00:05:14.239 01:46:24 env -- scripts/common.sh@353 -- # local d=1 00:05:14.239 01:46:24 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.239 01:46:24 env -- scripts/common.sh@355 -- # echo 1 00:05:14.239 01:46:24 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.239 01:46:24 env -- scripts/common.sh@366 -- # decimal 2 00:05:14.239 01:46:24 env -- scripts/common.sh@353 -- # local d=2 00:05:14.239 01:46:24 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.239 01:46:24 env -- scripts/common.sh@355 -- # echo 2 00:05:14.239 01:46:24 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.239 01:46:24 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.239 01:46:24 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.239 01:46:24 env -- scripts/common.sh@368 -- # return 0 00:05:14.239 01:46:24 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.239 01:46:24 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:14.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.239 --rc genhtml_branch_coverage=1 00:05:14.239 --rc genhtml_function_coverage=1 00:05:14.239 --rc genhtml_legend=1 00:05:14.239 --rc geninfo_all_blocks=1 00:05:14.239 --rc geninfo_unexecuted_blocks=1 00:05:14.239 00:05:14.239 ' 00:05:14.239 01:46:24 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:14.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.239 --rc genhtml_branch_coverage=1 00:05:14.239 --rc genhtml_function_coverage=1 00:05:14.239 --rc genhtml_legend=1 00:05:14.239 --rc geninfo_all_blocks=1 00:05:14.239 --rc geninfo_unexecuted_blocks=1 00:05:14.239 00:05:14.239 ' 00:05:14.239 01:46:24 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:14.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.239 --rc genhtml_branch_coverage=1 00:05:14.239 --rc genhtml_function_coverage=1 00:05:14.239 --rc genhtml_legend=1 00:05:14.239 --rc geninfo_all_blocks=1 00:05:14.239 --rc geninfo_unexecuted_blocks=1 00:05:14.239 00:05:14.239 ' 00:05:14.239 01:46:24 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:14.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.239 --rc genhtml_branch_coverage=1 00:05:14.239 --rc genhtml_function_coverage=1 00:05:14.239 --rc genhtml_legend=1 00:05:14.239 --rc geninfo_all_blocks=1 00:05:14.239 --rc geninfo_unexecuted_blocks=1 00:05:14.239 00:05:14.239 ' 00:05:14.239 01:46:24 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:14.239 01:46:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.239 01:46:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.239 01:46:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:14.239 ************************************ 00:05:14.239 START TEST env_memory 00:05:14.239 ************************************ 00:05:14.239 01:46:24 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:14.239 00:05:14.239 00:05:14.239 CUnit - A unit testing framework for C - Version 2.1-3 00:05:14.239 http://cunit.sourceforge.net/ 00:05:14.239 00:05:14.239 00:05:14.239 Suite: memory 00:05:14.498 Test: alloc and free memory map ...[2024-11-19 01:46:24.862745] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:14.498 passed 00:05:14.498 Test: mem map translation ...[2024-11-19 01:46:24.893498] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:14.498 [2024-11-19 01:46:24.893543] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:14.498 [2024-11-19 01:46:24.893602] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:14.498 [2024-11-19 01:46:24.893612] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:14.498 passed 00:05:14.498 Test: mem map registration ...[2024-11-19 01:46:24.957330] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:14.498 [2024-11-19 01:46:24.957375] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:14.498 passed 00:05:14.498 Test: mem map adjacent registrations ...passed 00:05:14.498 00:05:14.498 Run Summary: Type Total Ran Passed Failed Inactive 00:05:14.498 suites 1 1 n/a 0 0 00:05:14.498 tests 4 4 4 0 0 00:05:14.498 asserts 152 152 152 0 n/a 00:05:14.498 00:05:14.498 Elapsed time = 0.213 seconds 00:05:14.498 00:05:14.498 real 0m0.232s 00:05:14.498 user 0m0.211s 00:05:14.498 sys 0m0.017s 00:05:14.498 01:46:25 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.498 01:46:25 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:14.498 ************************************ 00:05:14.498 END TEST env_memory 00:05:14.498 ************************************ 00:05:14.498 01:46:25 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:14.498 01:46:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.498 01:46:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.498 01:46:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:14.498 ************************************ 00:05:14.498 START TEST env_vtophys 00:05:14.498 ************************************ 00:05:14.498 01:46:25 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:14.757 EAL: lib.eal log level changed from notice to debug 00:05:14.757 EAL: Detected lcore 0 as core 0 on socket 0 00:05:14.757 EAL: Detected lcore 1 as core 0 on socket 0 00:05:14.757 EAL: Detected lcore 2 as core 0 on socket 0 00:05:14.757 EAL: Detected lcore 3 as core 0 on socket 0 00:05:14.757 EAL: Detected lcore 4 as core 0 on socket 0 00:05:14.757 EAL: Detected lcore 5 as core 0 on socket 0 00:05:14.757 EAL: Detected lcore 6 as core 0 on socket 0 00:05:14.757 EAL: Detected lcore 7 as core 0 on socket 0 00:05:14.757 EAL: Detected lcore 8 as core 0 on socket 0 00:05:14.757 EAL: Detected lcore 9 as core 0 on socket 0 00:05:14.757 EAL: Maximum logical cores by configuration: 128 00:05:14.757 EAL: Detected CPU lcores: 10 00:05:14.757 EAL: Detected NUMA nodes: 1 00:05:14.757 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:14.757 EAL: Detected shared linkage of DPDK 00:05:14.757 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:14.757 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:14.757 EAL: Registered [vdev] bus. 00:05:14.757 EAL: bus.vdev log level changed from disabled to notice 00:05:14.757 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:14.757 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:14.757 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:14.757 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:14.757 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:14.757 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:14.757 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:14.757 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:14.757 EAL: No shared files mode enabled, IPC will be disabled 00:05:14.757 EAL: No shared files mode enabled, IPC is disabled 00:05:14.757 EAL: Selected IOVA mode 'PA' 00:05:14.757 EAL: Probing VFIO support... 00:05:14.757 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:14.757 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:14.757 EAL: Ask a virtual area of 0x2e000 bytes 00:05:14.757 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:14.757 EAL: Setting up physically contiguous memory... 00:05:14.757 EAL: Setting maximum number of open files to 524288 00:05:14.757 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:14.757 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:14.757 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.757 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:14.757 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.757 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.758 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:14.758 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:14.758 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.758 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:14.758 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.758 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.758 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:14.758 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:14.758 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.758 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:14.758 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.758 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.758 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:14.758 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:14.758 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.758 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:14.758 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.758 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.758 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:14.758 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:05:14.758 EAL: Hugepages will be freed exactly as allocated. 00:05:14.758 EAL: No shared files mode enabled, IPC is disabled 00:05:14.758 EAL: No shared files mode enabled, IPC is disabled 00:05:14.758 EAL: TSC frequency is ~2200000 KHz 00:05:14.758 EAL: Main lcore 0 is ready (tid=7fc546e58a00;cpuset=[0]) 00:05:14.758 EAL: Trying to obtain current memory policy. 00:05:14.758 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.758 EAL: Restoring previous memory policy: 0 00:05:14.758 EAL: request: mp_malloc_sync 00:05:14.758 EAL: No shared files mode enabled, IPC is disabled 00:05:14.758 EAL: Heap on socket 0 was expanded by 2MB 00:05:14.758 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:14.758 EAL: No shared files mode enabled, IPC is disabled 00:05:14.758 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:14.758 EAL: Mem event callback 'spdk:(nil)' registered 00:05:14.758 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:14.758 00:05:14.758 00:05:14.758 CUnit - A unit testing framework for C - Version 2.1-3 00:05:14.758 http://cunit.sourceforge.net/ 00:05:14.758 00:05:14.758 00:05:14.758 Suite: components_suite 00:05:14.758 Test: vtophys_malloc_test ...passed 00:05:14.758 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:14.758 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.758 EAL: Restoring previous memory policy: 4 00:05:14.758 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.758 EAL: request: mp_malloc_sync 00:05:14.758 EAL: No shared files mode enabled, IPC is disabled 00:05:14.758 EAL: Heap on socket 0 was expanded by 4MB 00:05:14.758 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.758 EAL: request: mp_malloc_sync 00:05:14.758 EAL: No shared files mode enabled, IPC is disabled 00:05:14.758 EAL: Heap on socket 0 was shrunk by 4MB 00:05:14.758 EAL: Trying to obtain current memory policy. 00:05:14.758 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.758 EAL: Restoring previous memory policy: 4 00:05:14.758 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.758 EAL: request: mp_malloc_sync 00:05:14.758 EAL: No shared files mode enabled, IPC is disabled 00:05:14.758 EAL: Heap on socket 0 was expanded by 6MB 00:05:14.758 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.758 EAL: request: mp_malloc_sync 00:05:14.758 EAL: No shared files mode enabled, IPC is disabled 00:05:14.758 EAL: Heap on socket 0 was shrunk by 6MB 00:05:14.758 EAL: Trying to obtain current memory policy. 00:05:14.758 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.758 EAL: Restoring previous memory policy: 4 00:05:14.758 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.758 EAL: request: mp_malloc_sync 00:05:14.758 EAL: No shared files mode enabled, IPC is disabled 00:05:14.758 EAL: Heap on socket 0 was expanded by 10MB 00:05:14.758 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.758 EAL: request: mp_malloc_sync 00:05:14.758 EAL: No shared files mode enabled, IPC is disabled 00:05:14.758 EAL: Heap on socket 0 was shrunk by 10MB 00:05:14.758 EAL: Trying to obtain current memory policy. 
00:05:14.758 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.758 EAL: Restoring previous memory policy: 4 00:05:14.758 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.758 EAL: request: mp_malloc_sync 00:05:14.758 EAL: No shared files mode enabled, IPC is disabled 00:05:14.758 EAL: Heap on socket 0 was expanded by 18MB 00:05:14.758 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.758 EAL: request: mp_malloc_sync 00:05:14.758 EAL: No shared files mode enabled, IPC is disabled 00:05:14.758 EAL: Heap on socket 0 was shrunk by 18MB 00:05:14.758 EAL: Trying to obtain current memory policy. 00:05:14.758 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.758 EAL: Restoring previous memory policy: 4 00:05:14.758 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.758 EAL: request: mp_malloc_sync 00:05:14.758 EAL: No shared files mode enabled, IPC is disabled 00:05:14.758 EAL: Heap on socket 0 was expanded by 34MB 00:05:14.758 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.758 EAL: request: mp_malloc_sync 00:05:14.758 EAL: No shared files mode enabled, IPC is disabled 00:05:14.758 EAL: Heap on socket 0 was shrunk by 34MB 00:05:14.758 EAL: Trying to obtain current memory policy. 00:05:14.758 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.758 EAL: Restoring previous memory policy: 4 00:05:14.758 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.758 EAL: request: mp_malloc_sync 00:05:14.758 EAL: No shared files mode enabled, IPC is disabled 00:05:14.758 EAL: Heap on socket 0 was expanded by 66MB 00:05:14.758 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.758 EAL: request: mp_malloc_sync 00:05:14.758 EAL: No shared files mode enabled, IPC is disabled 00:05:14.758 EAL: Heap on socket 0 was shrunk by 66MB 00:05:14.758 EAL: Trying to obtain current memory policy. 00:05:14.758 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.758 EAL: Restoring previous memory policy: 4 00:05:14.758 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.758 EAL: request: mp_malloc_sync 00:05:14.758 EAL: No shared files mode enabled, IPC is disabled 00:05:14.758 EAL: Heap on socket 0 was expanded by 130MB 00:05:14.758 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.758 EAL: request: mp_malloc_sync 00:05:14.758 EAL: No shared files mode enabled, IPC is disabled 00:05:14.758 EAL: Heap on socket 0 was shrunk by 130MB 00:05:14.758 EAL: Trying to obtain current memory policy. 00:05:14.758 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.017 EAL: Restoring previous memory policy: 4 00:05:15.017 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.017 EAL: request: mp_malloc_sync 00:05:15.017 EAL: No shared files mode enabled, IPC is disabled 00:05:15.017 EAL: Heap on socket 0 was expanded by 258MB 00:05:15.017 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.017 EAL: request: mp_malloc_sync 00:05:15.018 EAL: No shared files mode enabled, IPC is disabled 00:05:15.018 EAL: Heap on socket 0 was shrunk by 258MB 00:05:15.018 EAL: Trying to obtain current memory policy. 
00:05:15.018 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.018 EAL: Restoring previous memory policy: 4 00:05:15.018 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.018 EAL: request: mp_malloc_sync 00:05:15.018 EAL: No shared files mode enabled, IPC is disabled 00:05:15.018 EAL: Heap on socket 0 was expanded by 514MB 00:05:15.018 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.277 EAL: request: mp_malloc_sync 00:05:15.277 EAL: No shared files mode enabled, IPC is disabled 00:05:15.277 EAL: Heap on socket 0 was shrunk by 514MB 00:05:15.277 EAL: Trying to obtain current memory policy. 00:05:15.277 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.277 EAL: Restoring previous memory policy: 4 00:05:15.277 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.277 EAL: request: mp_malloc_sync 00:05:15.277 EAL: No shared files mode enabled, IPC is disabled 00:05:15.277 EAL: Heap on socket 0 was expanded by 1026MB 00:05:15.277 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.536 passed 00:05:15.536 00:05:15.536 Run Summary: Type Total Ran Passed Failed Inactive 00:05:15.536 suites 1 1 n/a 0 0 00:05:15.536 tests 2 2 2 0 0 00:05:15.536 asserts 5624 5624 5624 0 n/a 00:05:15.536 00:05:15.536 Elapsed time = 0.666 seconds 00:05:15.536 EAL: request: mp_malloc_sync 00:05:15.536 EAL: No shared files mode enabled, IPC is disabled 00:05:15.536 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:15.536 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.536 EAL: request: mp_malloc_sync 00:05:15.536 EAL: No shared files mode enabled, IPC is disabled 00:05:15.536 EAL: Heap on socket 0 was shrunk by 2MB 00:05:15.536 EAL: No shared files mode enabled, IPC is disabled 00:05:15.536 EAL: No shared files mode enabled, IPC is disabled 00:05:15.536 EAL: No shared files mode enabled, IPC is disabled 00:05:15.536 00:05:15.536 real 0m0.867s 00:05:15.536 user 0m0.455s 00:05:15.536 sys 0m0.283s 00:05:15.536 01:46:25 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.536 01:46:25 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:15.536 ************************************ 00:05:15.536 END TEST env_vtophys 00:05:15.536 ************************************ 00:05:15.536 01:46:26 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:15.536 01:46:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.536 01:46:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.536 01:46:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.536 ************************************ 00:05:15.536 START TEST env_pci 00:05:15.536 ************************************ 00:05:15.536 01:46:26 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:15.536 00:05:15.536 00:05:15.536 CUnit - A unit testing framework for C - Version 2.1-3 00:05:15.536 http://cunit.sourceforge.net/ 00:05:15.536 00:05:15.536 00:05:15.536 Suite: pci 00:05:15.536 Test: pci_hook ...[2024-11-19 01:46:26.032707] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68886 has claimed it 00:05:15.536 passed 00:05:15.536 00:05:15.536 Run Summary: Type Total Ran Passed Failed Inactive 00:05:15.536 suites 1 1 n/a 0 0 00:05:15.536 tests 1 1 1 0 0 00:05:15.536 asserts 25 25 25 0 n/a 00:05:15.536 00:05:15.536 Elapsed time = 0.002 seconds 00:05:15.536 EAL: Cannot find 
device (10000:00:01.0) 00:05:15.536 EAL: Failed to attach device on primary process 00:05:15.536 00:05:15.536 real 0m0.017s 00:05:15.536 user 0m0.008s 00:05:15.536 sys 0m0.009s 00:05:15.536 01:46:26 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.536 01:46:26 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:15.536 ************************************ 00:05:15.536 END TEST env_pci 00:05:15.536 ************************************ 00:05:15.536 01:46:26 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:15.536 01:46:26 env -- env/env.sh@15 -- # uname 00:05:15.536 01:46:26 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:15.536 01:46:26 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:15.536 01:46:26 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:15.536 01:46:26 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:15.536 01:46:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.536 01:46:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.536 ************************************ 00:05:15.536 START TEST env_dpdk_post_init 00:05:15.536 ************************************ 00:05:15.536 01:46:26 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:15.536 EAL: Detected CPU lcores: 10 00:05:15.536 EAL: Detected NUMA nodes: 1 00:05:15.536 EAL: Detected shared linkage of DPDK 00:05:15.536 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:15.536 EAL: Selected IOVA mode 'PA' 00:05:15.796 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:15.796 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:15.796 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:15.796 Starting DPDK initialization... 00:05:15.796 Starting SPDK post initialization... 00:05:15.796 SPDK NVMe probe 00:05:15.796 Attaching to 0000:00:10.0 00:05:15.796 Attaching to 0000:00:11.0 00:05:15.796 Attached to 0000:00:10.0 00:05:15.796 Attached to 0000:00:11.0 00:05:15.796 Cleaning up... 
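The env_dpdk_post_init run above is launched with arguments that env.sh assembles conditionally, as the xtrace shows: a one-core mask always, plus a fixed --base-virtaddr on Linux so DPDK maps memory at a predictable virtual address (useful when primary and secondary processes must agree on mappings; the rationale is my gloss, not stated in the log). A minimal sketch of that assembly:

    #!/usr/bin/env bash
    # Sketch of the argv construction traced above in env.sh.
    argv='-c 0x1 '                          # core mask: lcore 0 only
    [[ $(uname) == Linux ]] && argv+=--base-virtaddr=0x200000000000
    # $argv is left unquoted on purpose so it splits into separate arguments
    /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init $argv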
00:05:15.796 00:05:15.796 real 0m0.182s 00:05:15.796 user 0m0.049s 00:05:15.796 sys 0m0.033s 00:05:15.796 01:46:26 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.796 01:46:26 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:15.796 ************************************ 00:05:15.796 END TEST env_dpdk_post_init 00:05:15.796 ************************************ 00:05:15.796 01:46:26 env -- env/env.sh@26 -- # uname 00:05:15.796 01:46:26 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:15.796 01:46:26 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:15.796 01:46:26 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.796 01:46:26 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.796 01:46:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.796 ************************************ 00:05:15.796 START TEST env_mem_callbacks 00:05:15.796 ************************************ 00:05:15.796 01:46:26 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:15.796 EAL: Detected CPU lcores: 10 00:05:15.796 EAL: Detected NUMA nodes: 1 00:05:15.796 EAL: Detected shared linkage of DPDK 00:05:15.796 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:15.796 EAL: Selected IOVA mode 'PA' 00:05:16.055 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:16.055 00:05:16.055 00:05:16.055 CUnit - A unit testing framework for C - Version 2.1-3 00:05:16.055 http://cunit.sourceforge.net/ 00:05:16.055 00:05:16.055 00:05:16.055 Suite: memory 00:05:16.055 Test: test ... 00:05:16.055 register 0x200000200000 2097152 00:05:16.055 malloc 3145728 00:05:16.055 register 0x200000400000 4194304 00:05:16.055 buf 0x200000500000 len 3145728 PASSED 00:05:16.055 malloc 64 00:05:16.055 buf 0x2000004fff40 len 64 PASSED 00:05:16.055 malloc 4194304 00:05:16.055 register 0x200000800000 6291456 00:05:16.055 buf 0x200000a00000 len 4194304 PASSED 00:05:16.055 free 0x200000500000 3145728 00:05:16.055 free 0x2000004fff40 64 00:05:16.055 unregister 0x200000400000 4194304 PASSED 00:05:16.055 free 0x200000a00000 4194304 00:05:16.055 unregister 0x200000800000 6291456 PASSED 00:05:16.055 malloc 8388608 00:05:16.055 register 0x200000400000 10485760 00:05:16.055 buf 0x200000600000 len 8388608 PASSED 00:05:16.055 free 0x200000600000 8388608 00:05:16.055 unregister 0x200000400000 10485760 PASSED 00:05:16.055 passed 00:05:16.055 00:05:16.055 Run Summary: Type Total Ran Passed Failed Inactive 00:05:16.055 suites 1 1 n/a 0 0 00:05:16.055 tests 1 1 1 0 0 00:05:16.055 asserts 15 15 15 0 n/a 00:05:16.055 00:05:16.055 Elapsed time = 0.007 seconds 00:05:16.055 00:05:16.055 real 0m0.139s 00:05:16.055 user 0m0.014s 00:05:16.055 sys 0m0.024s 00:05:16.055 01:46:26 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.055 01:46:26 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:16.055 ************************************ 00:05:16.055 END TEST env_mem_callbacks 00:05:16.055 ************************************ 00:05:16.055 00:05:16.055 real 0m1.894s 00:05:16.055 user 0m0.941s 00:05:16.055 sys 0m0.600s 00:05:16.055 01:46:26 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.055 ************************************ 00:05:16.055 01:46:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:16.055 END TEST env 00:05:16.056 
************************************ 00:05:16.056 01:46:26 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:16.056 01:46:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.056 01:46:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.056 01:46:26 -- common/autotest_common.sh@10 -- # set +x 00:05:16.056 ************************************ 00:05:16.056 START TEST rpc 00:05:16.056 ************************************ 00:05:16.056 01:46:26 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:16.056 * Looking for test storage... 00:05:16.056 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:16.056 01:46:26 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:16.056 01:46:26 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:16.056 01:46:26 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:16.315 01:46:26 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:16.315 01:46:26 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.315 01:46:26 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.315 01:46:26 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.315 01:46:26 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.315 01:46:26 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.315 01:46:26 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.315 01:46:26 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.315 01:46:26 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.315 01:46:26 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.315 01:46:26 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.315 01:46:26 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.315 01:46:26 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:16.315 01:46:26 rpc -- scripts/common.sh@345 -- # : 1 00:05:16.315 01:46:26 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.315 01:46:26 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:16.315 01:46:26 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:16.315 01:46:26 rpc -- scripts/common.sh@353 -- # local d=1 00:05:16.315 01:46:26 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.315 01:46:26 rpc -- scripts/common.sh@355 -- # echo 1 00:05:16.315 01:46:26 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.315 01:46:26 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:16.315 01:46:26 rpc -- scripts/common.sh@353 -- # local d=2 00:05:16.315 01:46:26 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.315 01:46:26 rpc -- scripts/common.sh@355 -- # echo 2 00:05:16.315 01:46:26 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.315 01:46:26 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.315 01:46:26 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.315 01:46:26 rpc -- scripts/common.sh@368 -- # return 0 00:05:16.315 01:46:26 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.315 01:46:26 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:16.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.315 --rc genhtml_branch_coverage=1 00:05:16.315 --rc genhtml_function_coverage=1 00:05:16.315 --rc genhtml_legend=1 00:05:16.315 --rc geninfo_all_blocks=1 00:05:16.315 --rc geninfo_unexecuted_blocks=1 00:05:16.315 00:05:16.315 ' 00:05:16.315 01:46:26 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:16.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.315 --rc genhtml_branch_coverage=1 00:05:16.315 --rc genhtml_function_coverage=1 00:05:16.315 --rc genhtml_legend=1 00:05:16.315 --rc geninfo_all_blocks=1 00:05:16.315 --rc geninfo_unexecuted_blocks=1 00:05:16.315 00:05:16.315 ' 00:05:16.315 01:46:26 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:16.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.315 --rc genhtml_branch_coverage=1 00:05:16.315 --rc genhtml_function_coverage=1 00:05:16.315 --rc genhtml_legend=1 00:05:16.315 --rc geninfo_all_blocks=1 00:05:16.315 --rc geninfo_unexecuted_blocks=1 00:05:16.315 00:05:16.315 ' 00:05:16.315 01:46:26 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:16.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.315 --rc genhtml_branch_coverage=1 00:05:16.315 --rc genhtml_function_coverage=1 00:05:16.315 --rc genhtml_legend=1 00:05:16.315 --rc geninfo_all_blocks=1 00:05:16.315 --rc geninfo_unexecuted_blocks=1 00:05:16.315 00:05:16.315 ' 00:05:16.315 01:46:26 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69004 00:05:16.315 01:46:26 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:16.316 01:46:26 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.316 01:46:26 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69004 00:05:16.316 01:46:26 rpc -- common/autotest_common.sh@835 -- # '[' -z 69004 ']' 00:05:16.316 01:46:26 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.316 01:46:26 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.316 01:46:26 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
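rpc.sh launches the target with '-e bdev', which enables the bdev tracepoint group; the trace_get_info output later in the run shows that group (mask 0x8) fully enabled. A minimal sketch of the same launch-and-wait pattern, assuming a built SPDK tree (the until-loop is a stand-in for the suite's waitforlisten helper):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  spdk_pid=$!
  # Poll the RPC socket until the target is ready to serve requests
  until scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do sleep 0.1; done
  # Snapshot the runtime tracepoints, as the app's startup notice (below) suggests
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s spdk_tgt -p "$spdk_pid"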
00:05:16.316 01:46:26 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.316 01:46:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.316 [2024-11-19 01:46:26.819290] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:05:16.316 [2024-11-19 01:46:26.819395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69004 ] 00:05:16.575 [2024-11-19 01:46:26.970977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.575 [2024-11-19 01:46:26.995679] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:16.575 [2024-11-19 01:46:26.995741] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69004' to capture a snapshot of events at runtime. 00:05:16.575 [2024-11-19 01:46:26.995756] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:16.575 [2024-11-19 01:46:26.995766] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:16.575 [2024-11-19 01:46:26.995775] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69004 for offline analysis/debug. 00:05:16.575 [2024-11-19 01:46:26.996144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.575 [2024-11-19 01:46:27.041286] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:16.575 01:46:27 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.575 01:46:27 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:16.575 01:46:27 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:16.575 01:46:27 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:16.575 01:46:27 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:16.575 01:46:27 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:16.575 01:46:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.575 01:46:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.575 01:46:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.575 ************************************ 00:05:16.575 START TEST rpc_integrity 00:05:16.575 ************************************ 00:05:16.575 01:46:27 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:16.575 01:46:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:16.834 01:46:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.834 01:46:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.834 01:46:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.834 01:46:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:16.834 01:46:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:16.834 01:46:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:16.834 01:46:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd 
bdev_malloc_create 8 512 00:05:16.834 01:46:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.834 01:46:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.834 01:46:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.834 01:46:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:16.834 01:46:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:16.834 01:46:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.834 01:46:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.834 01:46:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.834 01:46:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:16.834 { 00:05:16.834 "name": "Malloc0", 00:05:16.834 "aliases": [ 00:05:16.834 "8a4d37da-7788-4464-9146-e68f1dd6d4ee" 00:05:16.834 ], 00:05:16.834 "product_name": "Malloc disk", 00:05:16.834 "block_size": 512, 00:05:16.834 "num_blocks": 16384, 00:05:16.834 "uuid": "8a4d37da-7788-4464-9146-e68f1dd6d4ee", 00:05:16.834 "assigned_rate_limits": { 00:05:16.834 "rw_ios_per_sec": 0, 00:05:16.834 "rw_mbytes_per_sec": 0, 00:05:16.834 "r_mbytes_per_sec": 0, 00:05:16.834 "w_mbytes_per_sec": 0 00:05:16.834 }, 00:05:16.834 "claimed": false, 00:05:16.834 "zoned": false, 00:05:16.834 "supported_io_types": { 00:05:16.834 "read": true, 00:05:16.834 "write": true, 00:05:16.834 "unmap": true, 00:05:16.834 "flush": true, 00:05:16.834 "reset": true, 00:05:16.834 "nvme_admin": false, 00:05:16.834 "nvme_io": false, 00:05:16.834 "nvme_io_md": false, 00:05:16.834 "write_zeroes": true, 00:05:16.834 "zcopy": true, 00:05:16.834 "get_zone_info": false, 00:05:16.834 "zone_management": false, 00:05:16.834 "zone_append": false, 00:05:16.834 "compare": false, 00:05:16.834 "compare_and_write": false, 00:05:16.834 "abort": true, 00:05:16.834 "seek_hole": false, 00:05:16.834 "seek_data": false, 00:05:16.834 "copy": true, 00:05:16.834 "nvme_iov_md": false 00:05:16.834 }, 00:05:16.834 "memory_domains": [ 00:05:16.834 { 00:05:16.834 "dma_device_id": "system", 00:05:16.834 "dma_device_type": 1 00:05:16.834 }, 00:05:16.834 { 00:05:16.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.834 "dma_device_type": 2 00:05:16.834 } 00:05:16.834 ], 00:05:16.834 "driver_specific": {} 00:05:16.834 } 00:05:16.834 ]' 00:05:16.834 01:46:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:16.834 01:46:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:16.834 01:46:27 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:16.834 01:46:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.834 01:46:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.834 [2024-11-19 01:46:27.349748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:16.834 [2024-11-19 01:46:27.349811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:16.834 [2024-11-19 01:46:27.349833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e32200 00:05:16.834 [2024-11-19 01:46:27.349858] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:16.834 [2024-11-19 01:46:27.351407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:16.834 [2024-11-19 01:46:27.351442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
Passthru0 00:05:16.834 Passthru0 00:05:16.834 01:46:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.834 01:46:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:16.834 01:46:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.834 01:46:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.834 01:46:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.834 01:46:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:16.834 { 00:05:16.834 "name": "Malloc0", 00:05:16.834 "aliases": [ 00:05:16.834 "8a4d37da-7788-4464-9146-e68f1dd6d4ee" 00:05:16.834 ], 00:05:16.834 "product_name": "Malloc disk", 00:05:16.834 "block_size": 512, 00:05:16.834 "num_blocks": 16384, 00:05:16.834 "uuid": "8a4d37da-7788-4464-9146-e68f1dd6d4ee", 00:05:16.834 "assigned_rate_limits": { 00:05:16.834 "rw_ios_per_sec": 0, 00:05:16.834 "rw_mbytes_per_sec": 0, 00:05:16.834 "r_mbytes_per_sec": 0, 00:05:16.834 "w_mbytes_per_sec": 0 00:05:16.834 }, 00:05:16.835 "claimed": true, 00:05:16.835 "claim_type": "exclusive_write", 00:05:16.835 "zoned": false, 00:05:16.835 "supported_io_types": { 00:05:16.835 "read": true, 00:05:16.835 "write": true, 00:05:16.835 "unmap": true, 00:05:16.835 "flush": true, 00:05:16.835 "reset": true, 00:05:16.835 "nvme_admin": false, 00:05:16.835 "nvme_io": false, 00:05:16.835 "nvme_io_md": false, 00:05:16.835 "write_zeroes": true, 00:05:16.835 "zcopy": true, 00:05:16.835 "get_zone_info": false, 00:05:16.835 "zone_management": false, 00:05:16.835 "zone_append": false, 00:05:16.835 "compare": false, 00:05:16.835 "compare_and_write": false, 00:05:16.835 "abort": true, 00:05:16.835 "seek_hole": false, 00:05:16.835 "seek_data": false, 00:05:16.835 "copy": true, 00:05:16.835 "nvme_iov_md": false 00:05:16.835 }, 00:05:16.835 "memory_domains": [ 00:05:16.835 { 00:05:16.835 "dma_device_id": "system", 00:05:16.835 "dma_device_type": 1 00:05:16.835 }, 00:05:16.835 { 00:05:16.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.835 "dma_device_type": 2 00:05:16.835 } 00:05:16.835 ], 00:05:16.835 "driver_specific": {} 00:05:16.835 }, 00:05:16.835 { 00:05:16.835 "name": "Passthru0", 00:05:16.835 "aliases": [ 00:05:16.835 "e270befe-2c1d-54b7-8c28-d8e523e6d1e3" 00:05:16.835 ], 00:05:16.835 "product_name": "passthru", 00:05:16.835 "block_size": 512, 00:05:16.835 "num_blocks": 16384, 00:05:16.835 "uuid": "e270befe-2c1d-54b7-8c28-d8e523e6d1e3", 00:05:16.835 "assigned_rate_limits": { 00:05:16.835 "rw_ios_per_sec": 0, 00:05:16.835 "rw_mbytes_per_sec": 0, 00:05:16.835 "r_mbytes_per_sec": 0, 00:05:16.835 "w_mbytes_per_sec": 0 00:05:16.835 }, 00:05:16.835 "claimed": false, 00:05:16.835 "zoned": false, 00:05:16.835 "supported_io_types": { 00:05:16.835 "read": true, 00:05:16.835 "write": true, 00:05:16.835 "unmap": true, 00:05:16.835 "flush": true, 00:05:16.835 "reset": true, 00:05:16.835 "nvme_admin": false, 00:05:16.835 "nvme_io": false, 00:05:16.835 "nvme_io_md": false, 00:05:16.835 "write_zeroes": true, 00:05:16.835 "zcopy": true, 00:05:16.835 "get_zone_info": false, 00:05:16.835 "zone_management": false, 00:05:16.835 "zone_append": false, 00:05:16.835 "compare": false, 00:05:16.835 "compare_and_write": false, 00:05:16.835 "abort": true, 00:05:16.835 "seek_hole": false, 00:05:16.835 "seek_data": false, 00:05:16.835 "copy": true, 00:05:16.835 "nvme_iov_md": false 00:05:16.835 }, 00:05:16.835 "memory_domains": [ 00:05:16.835 { 00:05:16.835 "dma_device_id": "system", 00:05:16.835 
"dma_device_type": 1 00:05:16.835 }, 00:05:16.835 { 00:05:16.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.835 "dma_device_type": 2 00:05:16.835 } 00:05:16.835 ], 00:05:16.835 "driver_specific": { 00:05:16.835 "passthru": { 00:05:16.835 "name": "Passthru0", 00:05:16.835 "base_bdev_name": "Malloc0" 00:05:16.835 } 00:05:16.835 } 00:05:16.835 } 00:05:16.835 ]' 00:05:16.835 01:46:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:16.835 01:46:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:16.835 01:46:27 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:16.835 01:46:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.835 01:46:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.093 01:46:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.094 01:46:27 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:17.094 01:46:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.094 01:46:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.094 01:46:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.094 01:46:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:17.094 01:46:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.094 01:46:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.094 01:46:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.094 01:46:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:17.094 01:46:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:17.094 ************************************ 00:05:17.094 END TEST rpc_integrity 00:05:17.094 ************************************ 00:05:17.094 01:46:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:17.094 00:05:17.094 real 0m0.339s 00:05:17.094 user 0m0.227s 00:05:17.094 sys 0m0.041s 00:05:17.094 01:46:27 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.094 01:46:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.094 01:46:27 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:17.094 01:46:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.094 01:46:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.094 01:46:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.094 ************************************ 00:05:17.094 START TEST rpc_plugins 00:05:17.094 ************************************ 00:05:17.094 01:46:27 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:17.094 01:46:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:17.094 01:46:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.094 01:46:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:17.094 01:46:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.094 01:46:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:17.094 01:46:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:17.094 01:46:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.094 01:46:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:17.094 01:46:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:05:17.094 01:46:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:17.094 { 00:05:17.094 "name": "Malloc1", 00:05:17.094 "aliases": [ 00:05:17.094 "9526befa-67d2-47e1-96ff-7b15c43cb317" 00:05:17.094 ], 00:05:17.094 "product_name": "Malloc disk", 00:05:17.094 "block_size": 4096, 00:05:17.094 "num_blocks": 256, 00:05:17.094 "uuid": "9526befa-67d2-47e1-96ff-7b15c43cb317", 00:05:17.094 "assigned_rate_limits": { 00:05:17.094 "rw_ios_per_sec": 0, 00:05:17.094 "rw_mbytes_per_sec": 0, 00:05:17.094 "r_mbytes_per_sec": 0, 00:05:17.094 "w_mbytes_per_sec": 0 00:05:17.094 }, 00:05:17.094 "claimed": false, 00:05:17.094 "zoned": false, 00:05:17.094 "supported_io_types": { 00:05:17.094 "read": true, 00:05:17.094 "write": true, 00:05:17.094 "unmap": true, 00:05:17.094 "flush": true, 00:05:17.094 "reset": true, 00:05:17.094 "nvme_admin": false, 00:05:17.094 "nvme_io": false, 00:05:17.094 "nvme_io_md": false, 00:05:17.094 "write_zeroes": true, 00:05:17.094 "zcopy": true, 00:05:17.094 "get_zone_info": false, 00:05:17.094 "zone_management": false, 00:05:17.094 "zone_append": false, 00:05:17.094 "compare": false, 00:05:17.094 "compare_and_write": false, 00:05:17.094 "abort": true, 00:05:17.094 "seek_hole": false, 00:05:17.094 "seek_data": false, 00:05:17.094 "copy": true, 00:05:17.094 "nvme_iov_md": false 00:05:17.094 }, 00:05:17.094 "memory_domains": [ 00:05:17.094 { 00:05:17.094 "dma_device_id": "system", 00:05:17.094 "dma_device_type": 1 00:05:17.094 }, 00:05:17.094 { 00:05:17.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.094 "dma_device_type": 2 00:05:17.094 } 00:05:17.094 ], 00:05:17.094 "driver_specific": {} 00:05:17.094 } 00:05:17.094 ]' 00:05:17.094 01:46:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:17.094 01:46:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:17.094 01:46:27 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:17.094 01:46:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.094 01:46:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:17.094 01:46:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.094 01:46:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:17.094 01:46:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.094 01:46:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:17.094 01:46:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.094 01:46:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:17.094 01:46:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:17.353 ************************************ 00:05:17.354 END TEST rpc_plugins 00:05:17.354 ************************************ 00:05:17.354 01:46:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:17.354 00:05:17.354 real 0m0.160s 00:05:17.354 user 0m0.112s 00:05:17.354 sys 0m0.014s 00:05:17.354 01:46:27 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.354 01:46:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:17.354 01:46:27 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:17.354 01:46:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.354 01:46:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.354 01:46:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.354 ************************************ 00:05:17.354 START TEST 
rpc_trace_cmd_test 00:05:17.354 ************************************ 00:05:17.354 01:46:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:17.354 01:46:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:17.354 01:46:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:17.354 01:46:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.354 01:46:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:17.354 01:46:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.354 01:46:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:17.354 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69004", 00:05:17.354 "tpoint_group_mask": "0x8", 00:05:17.354 "iscsi_conn": { 00:05:17.354 "mask": "0x2", 00:05:17.354 "tpoint_mask": "0x0" 00:05:17.354 }, 00:05:17.354 "scsi": { 00:05:17.354 "mask": "0x4", 00:05:17.354 "tpoint_mask": "0x0" 00:05:17.354 }, 00:05:17.354 "bdev": { 00:05:17.354 "mask": "0x8", 00:05:17.354 "tpoint_mask": "0xffffffffffffffff" 00:05:17.354 }, 00:05:17.354 "nvmf_rdma": { 00:05:17.354 "mask": "0x10", 00:05:17.354 "tpoint_mask": "0x0" 00:05:17.354 }, 00:05:17.354 "nvmf_tcp": { 00:05:17.354 "mask": "0x20", 00:05:17.354 "tpoint_mask": "0x0" 00:05:17.354 }, 00:05:17.354 "ftl": { 00:05:17.354 "mask": "0x40", 00:05:17.354 "tpoint_mask": "0x0" 00:05:17.354 }, 00:05:17.354 "blobfs": { 00:05:17.354 "mask": "0x80", 00:05:17.354 "tpoint_mask": "0x0" 00:05:17.354 }, 00:05:17.354 "dsa": { 00:05:17.354 "mask": "0x200", 00:05:17.354 "tpoint_mask": "0x0" 00:05:17.354 }, 00:05:17.354 "thread": { 00:05:17.354 "mask": "0x400", 00:05:17.354 "tpoint_mask": "0x0" 00:05:17.354 }, 00:05:17.354 "nvme_pcie": { 00:05:17.354 "mask": "0x800", 00:05:17.354 "tpoint_mask": "0x0" 00:05:17.354 }, 00:05:17.354 "iaa": { 00:05:17.354 "mask": "0x1000", 00:05:17.354 "tpoint_mask": "0x0" 00:05:17.354 }, 00:05:17.354 "nvme_tcp": { 00:05:17.354 "mask": "0x2000", 00:05:17.354 "tpoint_mask": "0x0" 00:05:17.354 }, 00:05:17.354 "bdev_nvme": { 00:05:17.354 "mask": "0x4000", 00:05:17.354 "tpoint_mask": "0x0" 00:05:17.354 }, 00:05:17.354 "sock": { 00:05:17.354 "mask": "0x8000", 00:05:17.354 "tpoint_mask": "0x0" 00:05:17.354 }, 00:05:17.354 "blob": { 00:05:17.354 "mask": "0x10000", 00:05:17.354 "tpoint_mask": "0x0" 00:05:17.354 }, 00:05:17.354 "bdev_raid": { 00:05:17.354 "mask": "0x20000", 00:05:17.354 "tpoint_mask": "0x0" 00:05:17.354 }, 00:05:17.354 "scheduler": { 00:05:17.354 "mask": "0x40000", 00:05:17.354 "tpoint_mask": "0x0" 00:05:17.354 } 00:05:17.354 }' 00:05:17.354 01:46:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:17.354 01:46:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:17.354 01:46:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:17.354 01:46:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:17.354 01:46:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:17.613 01:46:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:17.613 01:46:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:17.613 01:46:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:17.613 01:46:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:17.613 ************************************ 00:05:17.613 END TEST rpc_trace_cmd_test 00:05:17.613 
************************************ 00:05:17.613 01:46:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:17.613 00:05:17.613 real 0m0.276s 00:05:17.613 user 0m0.244s 00:05:17.613 sys 0m0.024s 00:05:17.613 01:46:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.613 01:46:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:17.613 01:46:28 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:17.613 01:46:28 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:17.613 01:46:28 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:17.613 01:46:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.613 01:46:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.613 01:46:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.613 ************************************ 00:05:17.613 START TEST rpc_daemon_integrity 00:05:17.613 ************************************ 00:05:17.613 01:46:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:17.613 01:46:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:17.613 01:46:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.614 01:46:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.614 01:46:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.614 01:46:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:17.614 01:46:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:17.614 01:46:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:17.614 01:46:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:17.614 01:46:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.614 01:46:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.614 01:46:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.614 01:46:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:17.614 01:46:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:17.614 01:46:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.614 01:46:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.614 01:46:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.614 01:46:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:17.614 { 00:05:17.614 "name": "Malloc2", 00:05:17.614 "aliases": [ 00:05:17.614 "ff18e6fb-d4b9-4b72-9323-83f9ce3471f2" 00:05:17.614 ], 00:05:17.614 "product_name": "Malloc disk", 00:05:17.614 "block_size": 512, 00:05:17.614 "num_blocks": 16384, 00:05:17.614 "uuid": "ff18e6fb-d4b9-4b72-9323-83f9ce3471f2", 00:05:17.614 "assigned_rate_limits": { 00:05:17.614 "rw_ios_per_sec": 0, 00:05:17.614 "rw_mbytes_per_sec": 0, 00:05:17.614 "r_mbytes_per_sec": 0, 00:05:17.614 "w_mbytes_per_sec": 0 00:05:17.614 }, 00:05:17.614 "claimed": false, 00:05:17.614 "zoned": false, 00:05:17.614 "supported_io_types": { 00:05:17.614 "read": true, 00:05:17.614 "write": true, 00:05:17.614 "unmap": true, 00:05:17.614 "flush": true, 00:05:17.614 "reset": true, 00:05:17.614 "nvme_admin": false, 00:05:17.614 "nvme_io": false, 00:05:17.614 "nvme_io_md": false, 00:05:17.614 "write_zeroes": true, 
00:05:17.614 "zcopy": true, 00:05:17.614 "get_zone_info": false, 00:05:17.614 "zone_management": false, 00:05:17.614 "zone_append": false, 00:05:17.614 "compare": false, 00:05:17.614 "compare_and_write": false, 00:05:17.614 "abort": true, 00:05:17.614 "seek_hole": false, 00:05:17.614 "seek_data": false, 00:05:17.614 "copy": true, 00:05:17.614 "nvme_iov_md": false 00:05:17.614 }, 00:05:17.614 "memory_domains": [ 00:05:17.614 { 00:05:17.614 "dma_device_id": "system", 00:05:17.614 "dma_device_type": 1 00:05:17.614 }, 00:05:17.614 { 00:05:17.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.614 "dma_device_type": 2 00:05:17.614 } 00:05:17.614 ], 00:05:17.614 "driver_specific": {} 00:05:17.614 } 00:05:17.614 ]' 00:05:17.614 01:46:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:17.874 01:46:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:17.874 01:46:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:17.874 01:46:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.874 01:46:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.874 [2024-11-19 01:46:28.270117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:17.874 [2024-11-19 01:46:28.270174] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:17.874 [2024-11-19 01:46:28.270193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1cf0430 00:05:17.874 [2024-11-19 01:46:28.270201] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:17.874 [2024-11-19 01:46:28.271390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:17.874 [2024-11-19 01:46:28.271425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:17.874 Passthru0 00:05:17.874 01:46:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.874 01:46:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:17.874 01:46:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.874 01:46:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.874 01:46:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.874 01:46:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:17.874 { 00:05:17.874 "name": "Malloc2", 00:05:17.874 "aliases": [ 00:05:17.874 "ff18e6fb-d4b9-4b72-9323-83f9ce3471f2" 00:05:17.874 ], 00:05:17.874 "product_name": "Malloc disk", 00:05:17.874 "block_size": 512, 00:05:17.874 "num_blocks": 16384, 00:05:17.874 "uuid": "ff18e6fb-d4b9-4b72-9323-83f9ce3471f2", 00:05:17.874 "assigned_rate_limits": { 00:05:17.874 "rw_ios_per_sec": 0, 00:05:17.874 "rw_mbytes_per_sec": 0, 00:05:17.874 "r_mbytes_per_sec": 0, 00:05:17.874 "w_mbytes_per_sec": 0 00:05:17.874 }, 00:05:17.874 "claimed": true, 00:05:17.874 "claim_type": "exclusive_write", 00:05:17.874 "zoned": false, 00:05:17.874 "supported_io_types": { 00:05:17.874 "read": true, 00:05:17.874 "write": true, 00:05:17.874 "unmap": true, 00:05:17.874 "flush": true, 00:05:17.874 "reset": true, 00:05:17.874 "nvme_admin": false, 00:05:17.874 "nvme_io": false, 00:05:17.874 "nvme_io_md": false, 00:05:17.874 "write_zeroes": true, 00:05:17.874 "zcopy": true, 00:05:17.874 "get_zone_info": false, 00:05:17.874 "zone_management": false, 00:05:17.874 
"zone_append": false, 00:05:17.874 "compare": false, 00:05:17.874 "compare_and_write": false, 00:05:17.874 "abort": true, 00:05:17.874 "seek_hole": false, 00:05:17.874 "seek_data": false, 00:05:17.874 "copy": true, 00:05:17.874 "nvme_iov_md": false 00:05:17.874 }, 00:05:17.874 "memory_domains": [ 00:05:17.874 { 00:05:17.874 "dma_device_id": "system", 00:05:17.874 "dma_device_type": 1 00:05:17.874 }, 00:05:17.874 { 00:05:17.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.874 "dma_device_type": 2 00:05:17.874 } 00:05:17.874 ], 00:05:17.874 "driver_specific": {} 00:05:17.874 }, 00:05:17.874 { 00:05:17.874 "name": "Passthru0", 00:05:17.874 "aliases": [ 00:05:17.874 "e0d84d46-6811-5e41-b7dc-288bbc861332" 00:05:17.874 ], 00:05:17.874 "product_name": "passthru", 00:05:17.874 "block_size": 512, 00:05:17.874 "num_blocks": 16384, 00:05:17.874 "uuid": "e0d84d46-6811-5e41-b7dc-288bbc861332", 00:05:17.874 "assigned_rate_limits": { 00:05:17.874 "rw_ios_per_sec": 0, 00:05:17.874 "rw_mbytes_per_sec": 0, 00:05:17.874 "r_mbytes_per_sec": 0, 00:05:17.874 "w_mbytes_per_sec": 0 00:05:17.874 }, 00:05:17.874 "claimed": false, 00:05:17.874 "zoned": false, 00:05:17.874 "supported_io_types": { 00:05:17.874 "read": true, 00:05:17.874 "write": true, 00:05:17.874 "unmap": true, 00:05:17.874 "flush": true, 00:05:17.874 "reset": true, 00:05:17.874 "nvme_admin": false, 00:05:17.874 "nvme_io": false, 00:05:17.874 "nvme_io_md": false, 00:05:17.874 "write_zeroes": true, 00:05:17.874 "zcopy": true, 00:05:17.874 "get_zone_info": false, 00:05:17.874 "zone_management": false, 00:05:17.874 "zone_append": false, 00:05:17.874 "compare": false, 00:05:17.874 "compare_and_write": false, 00:05:17.874 "abort": true, 00:05:17.874 "seek_hole": false, 00:05:17.874 "seek_data": false, 00:05:17.874 "copy": true, 00:05:17.874 "nvme_iov_md": false 00:05:17.874 }, 00:05:17.874 "memory_domains": [ 00:05:17.874 { 00:05:17.874 "dma_device_id": "system", 00:05:17.874 "dma_device_type": 1 00:05:17.874 }, 00:05:17.874 { 00:05:17.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.874 "dma_device_type": 2 00:05:17.874 } 00:05:17.874 ], 00:05:17.874 "driver_specific": { 00:05:17.874 "passthru": { 00:05:17.874 "name": "Passthru0", 00:05:17.874 "base_bdev_name": "Malloc2" 00:05:17.874 } 00:05:17.874 } 00:05:17.874 } 00:05:17.874 ]' 00:05:17.874 01:46:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:17.874 01:46:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:17.874 01:46:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:17.874 01:46:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.874 01:46:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.874 01:46:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.874 01:46:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:17.874 01:46:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.874 01:46:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.874 01:46:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.874 01:46:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:17.874 01:46:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.874 01:46:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:17.874 01:46:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.874 01:46:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:17.874 01:46:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:17.874 ************************************ 00:05:17.874 END TEST rpc_daemon_integrity 00:05:17.874 ************************************ 00:05:17.874 01:46:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:17.874 00:05:17.874 real 0m0.325s 00:05:17.874 user 0m0.222s 00:05:17.874 sys 0m0.036s 00:05:17.874 01:46:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.874 01:46:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.874 01:46:28 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:17.874 01:46:28 rpc -- rpc/rpc.sh@84 -- # killprocess 69004 00:05:17.874 01:46:28 rpc -- common/autotest_common.sh@954 -- # '[' -z 69004 ']' 00:05:17.874 01:46:28 rpc -- common/autotest_common.sh@958 -- # kill -0 69004 00:05:17.874 01:46:28 rpc -- common/autotest_common.sh@959 -- # uname 00:05:18.134 01:46:28 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.134 01:46:28 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69004 00:05:18.134 killing process with pid 69004 00:05:18.134 01:46:28 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.134 01:46:28 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.134 01:46:28 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69004' 00:05:18.134 01:46:28 rpc -- common/autotest_common.sh@973 -- # kill 69004 00:05:18.134 01:46:28 rpc -- common/autotest_common.sh@978 -- # wait 69004 00:05:18.134 ************************************ 00:05:18.134 END TEST rpc 00:05:18.134 ************************************ 00:05:18.134 00:05:18.134 real 0m2.176s 00:05:18.134 user 0m2.952s 00:05:18.134 sys 0m0.563s 00:05:18.134 01:46:28 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.134 01:46:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.393 01:46:28 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:18.393 01:46:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.393 01:46:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.393 01:46:28 -- common/autotest_common.sh@10 -- # set +x 00:05:18.393 ************************************ 00:05:18.393 START TEST skip_rpc 00:05:18.393 ************************************ 00:05:18.393 01:46:28 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:18.393 * Looking for test storage... 
00:05:18.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:18.393 01:46:28 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.393 01:46:28 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.393 01:46:28 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.393 01:46:28 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.393 01:46:28 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.393 01:46:28 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.393 01:46:28 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.393 01:46:28 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.393 01:46:28 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.393 01:46:28 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.393 01:46:28 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.393 01:46:28 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.393 01:46:28 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.393 01:46:28 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.393 01:46:28 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.393 01:46:28 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:18.393 01:46:28 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:18.393 01:46:28 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.393 01:46:28 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.393 01:46:28 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:18.393 01:46:28 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:18.394 01:46:28 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.394 01:46:28 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:18.394 01:46:28 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.394 01:46:28 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:18.394 01:46:28 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:18.394 01:46:28 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.394 01:46:28 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:18.394 01:46:28 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.394 01:46:28 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.394 01:46:28 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.394 01:46:28 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:18.394 01:46:28 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.394 01:46:28 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.394 --rc genhtml_branch_coverage=1 00:05:18.394 --rc genhtml_function_coverage=1 00:05:18.394 --rc genhtml_legend=1 00:05:18.394 --rc geninfo_all_blocks=1 00:05:18.394 --rc geninfo_unexecuted_blocks=1 00:05:18.394 00:05:18.394 ' 00:05:18.394 01:46:28 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.394 --rc genhtml_branch_coverage=1 00:05:18.394 --rc genhtml_function_coverage=1 00:05:18.394 --rc genhtml_legend=1 00:05:18.394 --rc geninfo_all_blocks=1 00:05:18.394 --rc geninfo_unexecuted_blocks=1 00:05:18.394 00:05:18.394 ' 00:05:18.394 01:46:28 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:18.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.394 --rc genhtml_branch_coverage=1 00:05:18.394 --rc genhtml_function_coverage=1 00:05:18.394 --rc genhtml_legend=1 00:05:18.394 --rc geninfo_all_blocks=1 00:05:18.394 --rc geninfo_unexecuted_blocks=1 00:05:18.394 00:05:18.394 ' 00:05:18.394 01:46:28 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.394 --rc genhtml_branch_coverage=1 00:05:18.394 --rc genhtml_function_coverage=1 00:05:18.394 --rc genhtml_legend=1 00:05:18.394 --rc geninfo_all_blocks=1 00:05:18.394 --rc geninfo_unexecuted_blocks=1 00:05:18.394 00:05:18.394 ' 00:05:18.394 01:46:28 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:18.394 01:46:28 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:18.394 01:46:28 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:18.394 01:46:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.394 01:46:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.394 01:46:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.394 ************************************ 00:05:18.394 START TEST skip_rpc 00:05:18.394 ************************************ 00:05:18.394 01:46:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:18.394 01:46:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69197 00:05:18.394 01:46:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:18.394 01:46:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.394 01:46:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:18.655 [2024-11-19 01:46:29.044766] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
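skip_rpc starts the target with --no-rpc-server, so no RPC socket is served and the spdk_get_version call that follows is expected to fail; the NOT wrapper in the next lines asserts exactly that. A minimal sketch of the same negative check (the sleep mirrors the suite's fixed startup wait):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  spdk_pid=$!
  sleep 5
  if scripts/rpc.py spdk_get_version; then
      echo 'unexpected: RPC server answered' >&2; exit 1
  fi
  kill "$spdk_pid" && wait "$spdk_pid"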
00:05:18.655 [2024-11-19 01:46:29.045267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69197 ] 00:05:18.655 [2024-11-19 01:46:29.191807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.655 [2024-11-19 01:46:29.210561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.655 [2024-11-19 01:46:29.242855] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:23.930 01:46:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:23.930 01:46:33 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:23.930 01:46:33 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:23.930 01:46:33 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:23.930 01:46:33 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.930 01:46:33 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:23.930 01:46:33 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.930 01:46:33 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:23.930 01:46:33 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.930 01:46:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.930 01:46:33 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:23.930 01:46:33 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:23.930 01:46:33 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:23.930 01:46:33 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:23.930 01:46:33 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:23.930 01:46:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:23.930 01:46:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69197 00:05:23.930 01:46:33 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 69197 ']' 00:05:23.930 01:46:33 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 69197 00:05:23.930 01:46:33 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:23.930 01:46:33 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.930 01:46:33 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69197 00:05:23.930 killing process with pid 69197 00:05:23.930 01:46:34 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.930 01:46:34 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.930 01:46:34 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69197' 00:05:23.930 01:46:34 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 69197 00:05:23.930 01:46:34 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 69197 00:05:23.930 ************************************ 00:05:23.930 END TEST skip_rpc 00:05:23.930 ************************************ 00:05:23.930 00:05:23.930 real 0m5.273s 00:05:23.930 user 0m4.998s 00:05:23.930 sys 0m0.191s 00:05:23.930 01:46:34 skip_rpc.skip_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.930 01:46:34 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.930 01:46:34 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:23.930 01:46:34 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.930 01:46:34 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.930 01:46:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.930 ************************************ 00:05:23.930 START TEST skip_rpc_with_json 00:05:23.930 ************************************ 00:05:23.930 01:46:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:23.930 01:46:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:23.930 01:46:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69284 00:05:23.930 01:46:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.930 01:46:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69284 00:05:23.931 01:46:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 69284 ']' 00:05:23.931 01:46:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.931 01:46:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.931 01:46:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.931 01:46:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.931 01:46:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.931 01:46:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:23.931 [2024-11-19 01:46:34.364643] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
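skip_rpc_with_json exercises configuration round-tripping: the run below first shows nvmf_get_transports failing while no TCP transport exists, then creates one and captures the whole live configuration with save_config. A minimal sketch of the same sequence, plus a relaunch from the saved file (the /tmp path and the restart step are illustrative, not part of this run):

  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py save_config > /tmp/config.json
  # A later target instance can be started straight from the saved state
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --json /tmp/config.json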
00:05:23.931 [2024-11-19 01:46:34.364744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69284 ] 00:05:23.931 [2024-11-19 01:46:34.510589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.931 [2024-11-19 01:46:34.529118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.190 [2024-11-19 01:46:34.565255] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:24.190 01:46:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.190 01:46:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:24.190 01:46:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:24.190 01:46:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.190 01:46:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.190 [2024-11-19 01:46:34.682435] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:24.190 request: 00:05:24.190 { 00:05:24.190 "trtype": "tcp", 00:05:24.190 "method": "nvmf_get_transports", 00:05:24.190 "req_id": 1 00:05:24.190 } 00:05:24.191 Got JSON-RPC error response 00:05:24.191 response: 00:05:24.191 { 00:05:24.191 "code": -19, 00:05:24.191 "message": "No such device" 00:05:24.191 } 00:05:24.191 01:46:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:24.191 01:46:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:24.191 01:46:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.191 01:46:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.191 [2024-11-19 01:46:34.694474] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:24.191 01:46:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.191 01:46:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:24.191 01:46:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.191 01:46:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.451 01:46:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.451 01:46:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:24.451 { 00:05:24.451 "subsystems": [ 00:05:24.451 { 00:05:24.451 "subsystem": "fsdev", 00:05:24.451 "config": [ 00:05:24.451 { 00:05:24.451 "method": "fsdev_set_opts", 00:05:24.451 "params": { 00:05:24.451 "fsdev_io_pool_size": 65535, 00:05:24.451 "fsdev_io_cache_size": 256 00:05:24.451 } 00:05:24.451 } 00:05:24.451 ] 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "subsystem": "keyring", 00:05:24.451 "config": [] 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "subsystem": "iobuf", 00:05:24.451 "config": [ 00:05:24.451 { 00:05:24.451 "method": "iobuf_set_options", 00:05:24.451 "params": { 00:05:24.451 "small_pool_count": 8192, 00:05:24.451 "large_pool_count": 1024, 00:05:24.451 "small_bufsize": 8192, 00:05:24.451 "large_bufsize": 135168, 00:05:24.451 "enable_numa": false 00:05:24.451 } 
00:05:24.451 } 00:05:24.451 ] 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "subsystem": "sock", 00:05:24.451 "config": [ 00:05:24.451 { 00:05:24.451 "method": "sock_set_default_impl", 00:05:24.451 "params": { 00:05:24.451 "impl_name": "uring" 00:05:24.451 } 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "method": "sock_impl_set_options", 00:05:24.451 "params": { 00:05:24.451 "impl_name": "ssl", 00:05:24.451 "recv_buf_size": 4096, 00:05:24.451 "send_buf_size": 4096, 00:05:24.451 "enable_recv_pipe": true, 00:05:24.451 "enable_quickack": false, 00:05:24.451 "enable_placement_id": 0, 00:05:24.451 "enable_zerocopy_send_server": true, 00:05:24.451 "enable_zerocopy_send_client": false, 00:05:24.451 "zerocopy_threshold": 0, 00:05:24.451 "tls_version": 0, 00:05:24.451 "enable_ktls": false 00:05:24.451 } 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "method": "sock_impl_set_options", 00:05:24.451 "params": { 00:05:24.451 "impl_name": "posix", 00:05:24.451 "recv_buf_size": 2097152, 00:05:24.451 "send_buf_size": 2097152, 00:05:24.451 "enable_recv_pipe": true, 00:05:24.451 "enable_quickack": false, 00:05:24.451 "enable_placement_id": 0, 00:05:24.451 "enable_zerocopy_send_server": true, 00:05:24.451 "enable_zerocopy_send_client": false, 00:05:24.451 "zerocopy_threshold": 0, 00:05:24.451 "tls_version": 0, 00:05:24.451 "enable_ktls": false 00:05:24.451 } 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "method": "sock_impl_set_options", 00:05:24.451 "params": { 00:05:24.451 "impl_name": "uring", 00:05:24.451 "recv_buf_size": 2097152, 00:05:24.451 "send_buf_size": 2097152, 00:05:24.451 "enable_recv_pipe": true, 00:05:24.451 "enable_quickack": false, 00:05:24.451 "enable_placement_id": 0, 00:05:24.451 "enable_zerocopy_send_server": false, 00:05:24.451 "enable_zerocopy_send_client": false, 00:05:24.451 "zerocopy_threshold": 0, 00:05:24.451 "tls_version": 0, 00:05:24.451 "enable_ktls": false 00:05:24.451 } 00:05:24.451 } 00:05:24.451 ] 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "subsystem": "vmd", 00:05:24.451 "config": [] 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "subsystem": "accel", 00:05:24.451 "config": [ 00:05:24.451 { 00:05:24.451 "method": "accel_set_options", 00:05:24.451 "params": { 00:05:24.451 "small_cache_size": 128, 00:05:24.451 "large_cache_size": 16, 00:05:24.451 "task_count": 2048, 00:05:24.451 "sequence_count": 2048, 00:05:24.451 "buf_count": 2048 00:05:24.451 } 00:05:24.451 } 00:05:24.451 ] 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "subsystem": "bdev", 00:05:24.451 "config": [ 00:05:24.451 { 00:05:24.451 "method": "bdev_set_options", 00:05:24.451 "params": { 00:05:24.451 "bdev_io_pool_size": 65535, 00:05:24.451 "bdev_io_cache_size": 256, 00:05:24.451 "bdev_auto_examine": true, 00:05:24.451 "iobuf_small_cache_size": 128, 00:05:24.451 "iobuf_large_cache_size": 16 00:05:24.451 } 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "method": "bdev_raid_set_options", 00:05:24.451 "params": { 00:05:24.451 "process_window_size_kb": 1024, 00:05:24.451 "process_max_bandwidth_mb_sec": 0 00:05:24.451 } 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "method": "bdev_iscsi_set_options", 00:05:24.451 "params": { 00:05:24.451 "timeout_sec": 30 00:05:24.451 } 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "method": "bdev_nvme_set_options", 00:05:24.451 "params": { 00:05:24.451 "action_on_timeout": "none", 00:05:24.451 "timeout_us": 0, 00:05:24.451 "timeout_admin_us": 0, 00:05:24.451 "keep_alive_timeout_ms": 10000, 00:05:24.451 "arbitration_burst": 0, 00:05:24.451 "low_priority_weight": 0, 00:05:24.451 "medium_priority_weight": 
0, 00:05:24.451 "high_priority_weight": 0, 00:05:24.451 "nvme_adminq_poll_period_us": 10000, 00:05:24.451 "nvme_ioq_poll_period_us": 0, 00:05:24.451 "io_queue_requests": 0, 00:05:24.451 "delay_cmd_submit": true, 00:05:24.451 "transport_retry_count": 4, 00:05:24.451 "bdev_retry_count": 3, 00:05:24.451 "transport_ack_timeout": 0, 00:05:24.451 "ctrlr_loss_timeout_sec": 0, 00:05:24.451 "reconnect_delay_sec": 0, 00:05:24.451 "fast_io_fail_timeout_sec": 0, 00:05:24.451 "disable_auto_failback": false, 00:05:24.451 "generate_uuids": false, 00:05:24.451 "transport_tos": 0, 00:05:24.451 "nvme_error_stat": false, 00:05:24.451 "rdma_srq_size": 0, 00:05:24.451 "io_path_stat": false, 00:05:24.451 "allow_accel_sequence": false, 00:05:24.451 "rdma_max_cq_size": 0, 00:05:24.451 "rdma_cm_event_timeout_ms": 0, 00:05:24.451 "dhchap_digests": [ 00:05:24.451 "sha256", 00:05:24.451 "sha384", 00:05:24.451 "sha512" 00:05:24.451 ], 00:05:24.451 "dhchap_dhgroups": [ 00:05:24.451 "null", 00:05:24.451 "ffdhe2048", 00:05:24.451 "ffdhe3072", 00:05:24.451 "ffdhe4096", 00:05:24.451 "ffdhe6144", 00:05:24.451 "ffdhe8192" 00:05:24.451 ] 00:05:24.451 } 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "method": "bdev_nvme_set_hotplug", 00:05:24.451 "params": { 00:05:24.451 "period_us": 100000, 00:05:24.451 "enable": false 00:05:24.451 } 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "method": "bdev_wait_for_examine" 00:05:24.451 } 00:05:24.451 ] 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "subsystem": "scsi", 00:05:24.451 "config": null 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "subsystem": "scheduler", 00:05:24.451 "config": [ 00:05:24.451 { 00:05:24.451 "method": "framework_set_scheduler", 00:05:24.451 "params": { 00:05:24.451 "name": "static" 00:05:24.451 } 00:05:24.451 } 00:05:24.451 ] 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "subsystem": "vhost_scsi", 00:05:24.451 "config": [] 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "subsystem": "vhost_blk", 00:05:24.451 "config": [] 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "subsystem": "ublk", 00:05:24.451 "config": [] 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "subsystem": "nbd", 00:05:24.451 "config": [] 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "subsystem": "nvmf", 00:05:24.451 "config": [ 00:05:24.451 { 00:05:24.451 "method": "nvmf_set_config", 00:05:24.451 "params": { 00:05:24.451 "discovery_filter": "match_any", 00:05:24.451 "admin_cmd_passthru": { 00:05:24.451 "identify_ctrlr": false 00:05:24.451 }, 00:05:24.451 "dhchap_digests": [ 00:05:24.451 "sha256", 00:05:24.451 "sha384", 00:05:24.451 "sha512" 00:05:24.451 ], 00:05:24.451 "dhchap_dhgroups": [ 00:05:24.451 "null", 00:05:24.451 "ffdhe2048", 00:05:24.451 "ffdhe3072", 00:05:24.451 "ffdhe4096", 00:05:24.451 "ffdhe6144", 00:05:24.451 "ffdhe8192" 00:05:24.451 ] 00:05:24.451 } 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "method": "nvmf_set_max_subsystems", 00:05:24.451 "params": { 00:05:24.452 "max_subsystems": 1024 00:05:24.452 } 00:05:24.452 }, 00:05:24.452 { 00:05:24.452 "method": "nvmf_set_crdt", 00:05:24.452 "params": { 00:05:24.452 "crdt1": 0, 00:05:24.452 "crdt2": 0, 00:05:24.452 "crdt3": 0 00:05:24.452 } 00:05:24.452 }, 00:05:24.452 { 00:05:24.452 "method": "nvmf_create_transport", 00:05:24.452 "params": { 00:05:24.452 "trtype": "TCP", 00:05:24.452 "max_queue_depth": 128, 00:05:24.452 "max_io_qpairs_per_ctrlr": 127, 00:05:24.452 "in_capsule_data_size": 4096, 00:05:24.452 "max_io_size": 131072, 00:05:24.452 "io_unit_size": 131072, 00:05:24.452 "max_aq_depth": 128, 00:05:24.452 "num_shared_buffers": 511, 00:05:24.452 
"buf_cache_size": 4294967295, 00:05:24.452 "dif_insert_or_strip": false, 00:05:24.452 "zcopy": false, 00:05:24.452 "c2h_success": true, 00:05:24.452 "sock_priority": 0, 00:05:24.452 "abort_timeout_sec": 1, 00:05:24.452 "ack_timeout": 0, 00:05:24.452 "data_wr_pool_size": 0 00:05:24.452 } 00:05:24.452 } 00:05:24.452 ] 00:05:24.452 }, 00:05:24.452 { 00:05:24.452 "subsystem": "iscsi", 00:05:24.452 "config": [ 00:05:24.452 { 00:05:24.452 "method": "iscsi_set_options", 00:05:24.452 "params": { 00:05:24.452 "node_base": "iqn.2016-06.io.spdk", 00:05:24.452 "max_sessions": 128, 00:05:24.452 "max_connections_per_session": 2, 00:05:24.452 "max_queue_depth": 64, 00:05:24.452 "default_time2wait": 2, 00:05:24.452 "default_time2retain": 20, 00:05:24.452 "first_burst_length": 8192, 00:05:24.452 "immediate_data": true, 00:05:24.452 "allow_duplicated_isid": false, 00:05:24.452 "error_recovery_level": 0, 00:05:24.452 "nop_timeout": 60, 00:05:24.452 "nop_in_interval": 30, 00:05:24.452 "disable_chap": false, 00:05:24.452 "require_chap": false, 00:05:24.452 "mutual_chap": false, 00:05:24.452 "chap_group": 0, 00:05:24.452 "max_large_datain_per_connection": 64, 00:05:24.452 "max_r2t_per_connection": 4, 00:05:24.452 "pdu_pool_size": 36864, 00:05:24.452 "immediate_data_pool_size": 16384, 00:05:24.452 "data_out_pool_size": 2048 00:05:24.452 } 00:05:24.452 } 00:05:24.452 ] 00:05:24.452 } 00:05:24.452 ] 00:05:24.452 } 00:05:24.452 01:46:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:24.452 01:46:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69284 00:05:24.452 01:46:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69284 ']' 00:05:24.452 01:46:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69284 00:05:24.452 01:46:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:24.452 01:46:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.452 01:46:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69284 00:05:24.452 01:46:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.452 01:46:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.452 01:46:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69284' 00:05:24.452 killing process with pid 69284 00:05:24.452 01:46:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 69284 00:05:24.452 01:46:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69284 00:05:24.711 01:46:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69298 00:05:24.711 01:46:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:24.711 01:46:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69298 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69298 ']' 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69298 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:29.996 01:46:40 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69298 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.996 killing process with pid 69298 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69298' 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 69298 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69298 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:29.996 00:05:29.996 real 0m6.066s 00:05:29.996 user 0m5.852s 00:05:29.996 sys 0m0.396s 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.996 ************************************ 00:05:29.996 END TEST skip_rpc_with_json 00:05:29.996 ************************************ 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:29.996 01:46:40 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:29.996 01:46:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.996 01:46:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.996 01:46:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.996 ************************************ 00:05:29.996 START TEST skip_rpc_with_delay 00:05:29.996 ************************************ 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:29.996 01:46:40 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:29.996 [2024-11-19 01:46:40.482532] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:29.996 00:05:29.996 real 0m0.086s 00:05:29.996 user 0m0.058s 00:05:29.996 sys 0m0.027s 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.996 ************************************ 00:05:29.996 END TEST skip_rpc_with_delay 00:05:29.996 ************************************ 00:05:29.996 01:46:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:29.996 01:46:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:29.996 01:46:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:29.996 01:46:40 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:29.996 01:46:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.996 01:46:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.996 01:46:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.996 ************************************ 00:05:29.996 START TEST exit_on_failed_rpc_init 00:05:29.996 ************************************ 00:05:29.996 01:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:29.996 01:46:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69408 00:05:29.996 01:46:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69408 00:05:29.996 01:46:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.996 01:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 69408 ']' 00:05:29.997 01:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.997 01:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.997 01:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.997 01:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.997 01:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:30.281 [2024-11-19 01:46:40.621353] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
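The skip_rpc_with_delay block above is a negative test: spdk_tgt must refuse --wait-for-rpc when --no-rpc-server disables the RPC server, and the NOT wrapper turns that required failure into a pass. A hedged sketch of the idiom — the real helper in autotest_common.sh also validates the executable, as the valid_exec_arg calls show; this is only the core inversion:

NOT() {
    if "$@"; then
        return 1        # unexpected success means the negative test fails
    fi
    return 0            # the expected non-zero exit is the passing outcome
}
NOT "$spdk_tgt" --no-rpc-server -m 0x1 --wait-for-rpc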
00:05:30.282 [2024-11-19 01:46:40.621458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69408 ] 00:05:30.282 [2024-11-19 01:46:40.765990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.282 [2024-11-19 01:46:40.784301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.282 [2024-11-19 01:46:40.817686] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:30.551 01:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.551 01:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:30.551 01:46:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.551 01:46:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:30.551 01:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:30.551 01:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:30.551 01:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.551 01:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:30.551 01:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.551 01:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:30.551 01:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.551 01:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:30.551 01:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.551 01:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:30.551 01:46:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:30.551 [2024-11-19 01:46:41.008385] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:05:30.551 [2024-11-19 01:46:41.008530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69418 ] 00:05:30.551 [2024-11-19 01:46:41.159694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.810 [2024-11-19 01:46:41.183459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.810 [2024-11-19 01:46:41.183591] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
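The rpc.c error above is the point of exit_on_failed_rpc_init: a second target on core 1 is steered at the same /var/tmp/spdk.sock and must fail initialization. The collision is easy to reproduce by hand, and the -r flag the json_config tests use below is how two instances normally coexist — the second socket path here is hypothetical:

"$spdk_tgt" -m 0x1 &                          # first instance owns /var/tmp/spdk.sock
"$spdk_tgt" -m 0x2                            # second instance exits: RPC socket path in use
"$spdk_tgt" -m 0x2 -r /var/tmp/spdk2.sock &   # hypothetical alternate path avoids the clash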
00:05:30.810 [2024-11-19 01:46:41.183609] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:30.810 [2024-11-19 01:46:41.183620] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:30.810 01:46:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:30.810 01:46:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:30.810 01:46:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:30.810 01:46:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:30.810 01:46:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:30.810 01:46:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:30.810 01:46:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:30.810 01:46:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69408 00:05:30.810 01:46:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 69408 ']' 00:05:30.810 01:46:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 69408 00:05:30.810 01:46:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:30.810 01:46:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.810 01:46:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69408 00:05:30.810 01:46:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.810 killing process with pid 69408 00:05:30.810 01:46:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.810 01:46:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69408' 00:05:30.810 01:46:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 69408 00:05:30.810 01:46:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 69408 00:05:31.069 00:05:31.069 real 0m0.918s 00:05:31.069 user 0m1.066s 00:05:31.069 sys 0m0.259s 00:05:31.069 ************************************ 00:05:31.069 END TEST exit_on_failed_rpc_init 00:05:31.069 ************************************ 00:05:31.069 01:46:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.069 01:46:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:31.069 01:46:41 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:31.069 00:05:31.069 real 0m12.740s 00:05:31.069 user 0m12.156s 00:05:31.069 sys 0m1.076s 00:05:31.069 01:46:41 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.069 ************************************ 00:05:31.069 END TEST skip_rpc 00:05:31.069 ************************************ 00:05:31.069 01:46:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.069 01:46:41 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:31.069 01:46:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.069 01:46:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.069 01:46:41 -- common/autotest_common.sh@10 -- # set +x 00:05:31.069 
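Both skip_rpc teardown paths above funnel through the same killprocess helper: confirm the pid is alive, read the process name (reactor_0 here), signal, and reap. A sketch assuming this shape from the steps the trace prints — the sudo branch guarded by the '[' reactor_0 = sudo ']' test is elided:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0       # already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")      # reactor_0 for an SPDK target
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                          # reap; tolerate the signal exit code
}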
************************************ 00:05:31.069 START TEST rpc_client 00:05:31.069 ************************************ 00:05:31.069 01:46:41 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:31.069 * Looking for test storage... 00:05:31.069 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:31.069 01:46:41 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:31.069 01:46:41 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:31.069 01:46:41 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:31.328 01:46:41 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:31.328 01:46:41 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.328 01:46:41 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.328 01:46:41 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.328 01:46:41 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.328 01:46:41 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.328 01:46:41 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.328 01:46:41 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.328 01:46:41 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.328 01:46:41 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.328 01:46:41 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.328 01:46:41 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.328 01:46:41 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:31.328 01:46:41 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:31.328 01:46:41 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.328 01:46:41 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.328 01:46:41 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:31.328 01:46:41 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:31.328 01:46:41 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.328 01:46:41 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:31.328 01:46:41 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.328 01:46:41 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:31.328 01:46:41 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:31.328 01:46:41 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.328 01:46:41 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:31.328 01:46:41 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.328 01:46:41 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.328 01:46:41 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.328 01:46:41 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:31.328 01:46:41 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.328 01:46:41 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:31.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.328 --rc genhtml_branch_coverage=1 00:05:31.328 --rc genhtml_function_coverage=1 00:05:31.328 --rc genhtml_legend=1 00:05:31.328 --rc geninfo_all_blocks=1 00:05:31.328 --rc geninfo_unexecuted_blocks=1 00:05:31.328 00:05:31.328 ' 00:05:31.328 01:46:41 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:31.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.328 --rc genhtml_branch_coverage=1 00:05:31.328 --rc genhtml_function_coverage=1 00:05:31.328 --rc genhtml_legend=1 00:05:31.328 --rc geninfo_all_blocks=1 00:05:31.328 --rc geninfo_unexecuted_blocks=1 00:05:31.328 00:05:31.328 ' 00:05:31.328 01:46:41 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:31.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.328 --rc genhtml_branch_coverage=1 00:05:31.328 --rc genhtml_function_coverage=1 00:05:31.328 --rc genhtml_legend=1 00:05:31.328 --rc geninfo_all_blocks=1 00:05:31.328 --rc geninfo_unexecuted_blocks=1 00:05:31.328 00:05:31.328 ' 00:05:31.328 01:46:41 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:31.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.328 --rc genhtml_branch_coverage=1 00:05:31.328 --rc genhtml_function_coverage=1 00:05:31.328 --rc genhtml_legend=1 00:05:31.328 --rc geninfo_all_blocks=1 00:05:31.328 --rc geninfo_unexecuted_blocks=1 00:05:31.328 00:05:31.328 ' 00:05:31.328 01:46:41 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:31.328 OK 00:05:31.328 01:46:41 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:31.328 00:05:31.328 real 0m0.207s 00:05:31.328 user 0m0.127s 00:05:31.328 sys 0m0.088s 00:05:31.328 01:46:41 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.328 01:46:41 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:31.328 ************************************ 00:05:31.328 END TEST rpc_client 00:05:31.328 ************************************ 00:05:31.328 01:46:41 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:31.328 01:46:41 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.328 01:46:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.328 01:46:41 -- common/autotest_common.sh@10 -- # set +x 00:05:31.328 ************************************ 00:05:31.328 START TEST json_config 00:05:31.328 ************************************ 00:05:31.328 01:46:41 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:31.328 01:46:41 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:31.328 01:46:41 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:31.328 01:46:41 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:31.588 01:46:41 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:31.588 01:46:41 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.588 01:46:41 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.588 01:46:41 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.588 01:46:41 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.588 01:46:41 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.588 01:46:41 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.588 01:46:41 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.588 01:46:41 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.588 01:46:41 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.588 01:46:41 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.588 01:46:41 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.588 01:46:41 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:31.588 01:46:41 json_config -- scripts/common.sh@345 -- # : 1 00:05:31.588 01:46:41 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.588 01:46:41 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.588 01:46:41 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:31.588 01:46:41 json_config -- scripts/common.sh@353 -- # local d=1 00:05:31.588 01:46:41 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.588 01:46:41 json_config -- scripts/common.sh@355 -- # echo 1 00:05:31.588 01:46:41 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.588 01:46:41 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:31.588 01:46:41 json_config -- scripts/common.sh@353 -- # local d=2 00:05:31.588 01:46:41 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.588 01:46:41 json_config -- scripts/common.sh@355 -- # echo 2 00:05:31.588 01:46:41 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.588 01:46:41 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.588 01:46:41 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.588 01:46:41 json_config -- scripts/common.sh@368 -- # return 0 00:05:31.588 01:46:41 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.588 01:46:41 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:31.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.588 --rc genhtml_branch_coverage=1 00:05:31.588 --rc genhtml_function_coverage=1 00:05:31.588 --rc genhtml_legend=1 00:05:31.588 --rc geninfo_all_blocks=1 00:05:31.588 --rc geninfo_unexecuted_blocks=1 00:05:31.588 00:05:31.588 ' 00:05:31.588 01:46:41 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:31.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.588 --rc genhtml_branch_coverage=1 00:05:31.588 --rc genhtml_function_coverage=1 00:05:31.588 --rc genhtml_legend=1 00:05:31.588 --rc geninfo_all_blocks=1 00:05:31.588 --rc geninfo_unexecuted_blocks=1 00:05:31.588 00:05:31.588 ' 00:05:31.588 01:46:41 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:31.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.588 --rc genhtml_branch_coverage=1 00:05:31.588 --rc genhtml_function_coverage=1 00:05:31.588 --rc genhtml_legend=1 00:05:31.588 --rc geninfo_all_blocks=1 00:05:31.588 --rc geninfo_unexecuted_blocks=1 00:05:31.588 00:05:31.588 ' 00:05:31.588 01:46:41 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:31.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.588 --rc genhtml_branch_coverage=1 00:05:31.588 --rc genhtml_function_coverage=1 00:05:31.588 --rc genhtml_legend=1 00:05:31.588 --rc geninfo_all_blocks=1 00:05:31.588 --rc geninfo_unexecuted_blocks=1 00:05:31.588 00:05:31.588 ' 00:05:31.588 01:46:41 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:31.588 01:46:41 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:31.588 01:46:41 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:31.588 01:46:41 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:31.588 01:46:41 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:31.588 01:46:41 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:31.588 01:46:41 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:31.588 01:46:41 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:31.588 01:46:41 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:31.588 01:46:41 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:31.588 01:46:41 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:31.588 01:46:41 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:31.588 01:46:41 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:05:31.588 01:46:41 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:05:31.588 01:46:41 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:31.588 01:46:41 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:31.588 01:46:41 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:31.588 01:46:41 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:31.588 01:46:41 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:31.588 01:46:41 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:31.588 01:46:41 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:31.588 01:46:41 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:31.588 01:46:41 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:31.588 01:46:41 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.588 01:46:41 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.588 01:46:41 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.588 01:46:41 json_config -- paths/export.sh@5 -- # export PATH 00:05:31.588 01:46:41 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.588 01:46:42 json_config -- nvmf/common.sh@51 -- # : 0 00:05:31.588 01:46:42 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:31.588 01:46:42 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:31.588 01:46:42 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:31.588 01:46:42 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:31.588 01:46:42 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:31.588 01:46:42 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:31.588 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:31.588 01:46:42 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:31.588 01:46:42 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:31.588 01:46:42 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:31.588 01:46:42 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:31.588 01:46:42 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:31.588 01:46:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:31.588 01:46:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:31.588 01:46:42 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:31.588 01:46:42 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:31.588 01:46:42 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:31.589 01:46:42 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:31.589 01:46:42 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:31.589 01:46:42 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:31.589 01:46:42 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:31.589 01:46:42 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:31.589 01:46:42 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:31.589 01:46:42 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:31.589 01:46:42 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:31.589 01:46:42 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:31.589 INFO: JSON configuration test init 00:05:31.589 01:46:42 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:31.589 01:46:42 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:31.589 01:46:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:31.589 01:46:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.589 01:46:42 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:31.589 01:46:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:31.589 01:46:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.589 Waiting for target to run... 00:05:31.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
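The declare -A lines above are the whole bookkeeping model of json_config/common.sh: one associative-array slot per app role, so a target and an initiator can be driven side by side with their own sockets, core masks, and config files. Restated compactly, with all values verbatim from the trace:

declare -A app_pid=([target]='' [initiator]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock' [initiator]='/var/tmp/spdk_initiator.sock')
declare -A app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024')
declare -A configs_path=([target]='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' [initiator]='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json')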
00:05:31.589 01:46:42 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:31.589 01:46:42 json_config -- json_config/common.sh@9 -- # local app=target 00:05:31.589 01:46:42 json_config -- json_config/common.sh@10 -- # shift 00:05:31.589 01:46:42 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:31.589 01:46:42 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:31.589 01:46:42 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:31.589 01:46:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.589 01:46:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.589 01:46:42 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=69552 00:05:31.589 01:46:42 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:31.589 01:46:42 json_config -- json_config/common.sh@25 -- # waitforlisten 69552 /var/tmp/spdk_tgt.sock 00:05:31.589 01:46:42 json_config -- common/autotest_common.sh@835 -- # '[' -z 69552 ']' 00:05:31.589 01:46:42 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:31.589 01:46:42 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:31.589 01:46:42 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.589 01:46:42 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:31.589 01:46:42 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.589 01:46:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.589 [2024-11-19 01:46:42.075086] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
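Because the target is launched with --wait-for-rpc and -r /var/tmp/spdk_tgt.sock, it sits idle until told to finish initializing, and every call in this test goes through that alternate socket. A minimal manual session against such a target might look like this — a sketch, since the test itself proceeds through its tgt_rpc wrapper and load_config rather than framework_start_init:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/spdk_tgt.sock framework_start_init   # leave the --wait-for-rpc holding state
"$rpc" -s /var/tmp/spdk_tgt.sock save_config            # dump the live configuration as JSON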
00:05:31.589 [2024-11-19 01:46:42.075363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69552 ] 00:05:31.847 [2024-11-19 01:46:42.352820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.847 [2024-11-19 01:46:42.364699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.783 01:46:43 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.783 01:46:43 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:32.783 01:46:43 json_config -- json_config/common.sh@26 -- # echo '' 00:05:32.783 00:05:32.783 01:46:43 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:32.783 01:46:43 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:32.783 01:46:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:32.783 01:46:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.783 01:46:43 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:32.783 01:46:43 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:32.783 01:46:43 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:32.783 01:46:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.783 01:46:43 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:32.783 01:46:43 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:32.783 01:46:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:33.043 [2024-11-19 01:46:43.486762] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:33.043 01:46:43 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:33.043 01:46:43 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:33.043 01:46:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:33.043 01:46:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.302 01:46:43 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:33.302 01:46:43 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:33.302 01:46:43 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:33.302 01:46:43 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:33.302 01:46:43 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:33.302 01:46:43 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:33.302 01:46:43 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:33.302 01:46:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:33.561 01:46:43 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:33.561 01:46:43 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:33.561 01:46:43 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:05:33.561 01:46:43 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:33.561 01:46:43 json_config -- json_config/json_config.sh@54 -- # sort 00:05:33.561 01:46:43 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:33.561 01:46:43 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:33.561 01:46:43 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:33.561 01:46:43 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:33.561 01:46:43 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:33.561 01:46:43 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:33.561 01:46:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.561 01:46:43 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:33.561 01:46:43 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:33.561 01:46:43 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:33.561 01:46:43 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:33.561 01:46:43 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:33.561 01:46:43 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:33.561 01:46:43 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:33.561 01:46:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:33.561 01:46:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.561 01:46:43 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:33.561 01:46:43 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:33.561 01:46:43 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:33.561 01:46:43 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:33.561 01:46:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:33.820 MallocForNvmf0 00:05:33.820 01:46:44 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:33.820 01:46:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:34.078 MallocForNvmf1 00:05:34.078 01:46:44 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:34.078 01:46:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:34.337 [2024-11-19 01:46:44.723728] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:34.337 01:46:44 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:34.337 01:46:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:34.595 01:46:45 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:34.595 01:46:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:34.853 01:46:45 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:34.854 01:46:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:35.112 01:46:45 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:35.112 01:46:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:35.370 [2024-11-19 01:46:45.764290] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:35.370 01:46:45 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:35.370 01:46:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:35.370 01:46:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.370 01:46:45 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:35.370 01:46:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:35.370 01:46:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.370 01:46:45 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:35.370 01:46:45 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:35.370 01:46:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:35.629 MallocBdevForConfigChangeCheck 00:05:35.629 01:46:46 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:35.629 01:46:46 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:35.629 01:46:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.629 01:46:46 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:35.629 01:46:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:36.195 INFO: shutting down applications... 00:05:36.195 01:46:46 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
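The create_nvmf_subsystem_config steps above reduce to a short rpc.py sequence; every command below is taken verbatim from the trace and runs against the same target socket:

rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
$rpc bdev_malloc_create 8 512 --name MallocForNvmf0
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
$rpc nvmf_create_transport -t tcp -u 8192 -c 0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420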
00:05:36.195 01:46:46 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:36.195 01:46:46 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:36.195 01:46:46 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:36.195 01:46:46 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:36.453 Calling clear_iscsi_subsystem 00:05:36.453 Calling clear_nvmf_subsystem 00:05:36.453 Calling clear_nbd_subsystem 00:05:36.453 Calling clear_ublk_subsystem 00:05:36.453 Calling clear_vhost_blk_subsystem 00:05:36.453 Calling clear_vhost_scsi_subsystem 00:05:36.453 Calling clear_bdev_subsystem 00:05:36.453 01:46:46 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:36.453 01:46:46 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:36.453 01:46:46 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:36.453 01:46:46 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:36.453 01:46:46 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:36.453 01:46:46 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:36.711 01:46:47 json_config -- json_config/json_config.sh@352 -- # break 00:05:36.712 01:46:47 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:36.712 01:46:47 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:36.712 01:46:47 json_config -- json_config/common.sh@31 -- # local app=target 00:05:36.712 01:46:47 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:36.712 01:46:47 json_config -- json_config/common.sh@35 -- # [[ -n 69552 ]] 00:05:36.712 01:46:47 json_config -- json_config/common.sh@38 -- # kill -SIGINT 69552 00:05:36.712 01:46:47 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:36.712 01:46:47 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.712 01:46:47 json_config -- json_config/common.sh@41 -- # kill -0 69552 00:05:36.712 01:46:47 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:37.278 01:46:47 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.278 SPDK target shutdown done 00:05:37.278 INFO: relaunching applications... 00:05:37.278 01:46:47 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.278 01:46:47 json_config -- json_config/common.sh@41 -- # kill -0 69552 00:05:37.278 01:46:47 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:37.278 01:46:47 json_config -- json_config/common.sh@43 -- # break 00:05:37.278 01:46:47 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:37.278 01:46:47 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:37.278 01:46:47 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
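The shutdown above is a bounded SIGINT loop: signal once, then probe with kill -0 for up to 30 half-second intervals (pid 69552 needed one extra probe in this run). Distilled from the trace:

kill -SIGINT "$pid"
for (( i = 0; i < 30; i++ )); do
    kill -0 "$pid" 2>/dev/null || break   # target exited; shutdown done
    sleep 0.5
done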
00:05:37.278 01:46:47 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:37.278 01:46:47 json_config -- json_config/common.sh@9 -- # local app=target 00:05:37.278 01:46:47 json_config -- json_config/common.sh@10 -- # shift 00:05:37.278 01:46:47 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:37.279 01:46:47 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:37.279 01:46:47 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:37.279 01:46:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:37.279 01:46:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:37.279 01:46:47 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=69748 00:05:37.279 01:46:47 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:37.279 01:46:47 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:37.279 Waiting for target to run... 00:05:37.279 01:46:47 json_config -- json_config/common.sh@25 -- # waitforlisten 69748 /var/tmp/spdk_tgt.sock 00:05:37.279 01:46:47 json_config -- common/autotest_common.sh@835 -- # '[' -z 69748 ']' 00:05:37.279 01:46:47 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:37.279 01:46:47 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.279 01:46:47 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:37.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:37.279 01:46:47 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.279 01:46:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.279 [2024-11-19 01:46:47.861303] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:05:37.279 [2024-11-19 01:46:47.861387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69748 ] 00:05:37.537 [2024-11-19 01:46:48.145146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.795 [2024-11-19 01:46:48.159039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.795 [2024-11-19 01:46:48.287157] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:38.053 [2024-11-19 01:46:48.481943] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:38.053 [2024-11-19 01:46:48.514016] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:38.312 01:46:48 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.312 00:05:38.312 INFO: Checking if target configuration is the same... 
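With the target relaunched from spdk_tgt_config.json, the check that follows re-saves the live configuration and diffs it against the file. Both sides are first passed through config_filter.py -method sort, since JSON object order is not guaranteed stable across save_config calls. Roughly (temp-file names illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py

    $rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/live.json
    $filter -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/file.json
    diff -u /tmp/live.json /tmp/file.json   # exit 0: identical; exit 1: config drift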
00:05:38.312 01:46:48 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:38.312 01:46:48 json_config -- json_config/common.sh@26 -- # echo '' 00:05:38.312 01:46:48 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:38.312 01:46:48 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:38.312 01:46:48 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:38.312 01:46:48 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:38.312 01:46:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:38.312 + '[' 2 -ne 2 ']' 00:05:38.312 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:38.312 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:38.312 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:38.312 +++ basename /dev/fd/62 00:05:38.312 ++ mktemp /tmp/62.XXX 00:05:38.312 + tmp_file_1=/tmp/62.JMw 00:05:38.312 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:38.312 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:38.312 + tmp_file_2=/tmp/spdk_tgt_config.json.0lQ 00:05:38.312 + ret=0 00:05:38.312 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:38.879 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:38.879 + diff -u /tmp/62.JMw /tmp/spdk_tgt_config.json.0lQ 00:05:38.879 INFO: JSON config files are the same 00:05:38.879 + echo 'INFO: JSON config files are the same' 00:05:38.879 + rm /tmp/62.JMw /tmp/spdk_tgt_config.json.0lQ 00:05:38.879 + exit 0 00:05:38.879 INFO: changing configuration and checking if this can be detected... 00:05:38.879 01:46:49 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:38.879 01:46:49 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:38.879 01:46:49 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:38.879 01:46:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:38.879 01:46:49 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:38.879 01:46:49 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:38.879 01:46:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:38.879 + '[' 2 -ne 2 ']' 00:05:39.138 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:39.138 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:39.138 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:39.138 +++ basename /dev/fd/62 00:05:39.138 ++ mktemp /tmp/62.XXX 00:05:39.138 + tmp_file_1=/tmp/62.dkH 00:05:39.138 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:39.138 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:39.138 + tmp_file_2=/tmp/spdk_tgt_config.json.JKd 00:05:39.138 + ret=0 00:05:39.138 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:39.396 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:39.396 + diff -u /tmp/62.dkH /tmp/spdk_tgt_config.json.JKd 00:05:39.396 + ret=1 00:05:39.396 + echo '=== Start of file: /tmp/62.dkH ===' 00:05:39.396 + cat /tmp/62.dkH 00:05:39.396 + echo '=== End of file: /tmp/62.dkH ===' 00:05:39.396 + echo '' 00:05:39.397 + echo '=== Start of file: /tmp/spdk_tgt_config.json.JKd ===' 00:05:39.397 + cat /tmp/spdk_tgt_config.json.JKd 00:05:39.397 + echo '=== End of file: /tmp/spdk_tgt_config.json.JKd ===' 00:05:39.397 + echo '' 00:05:39.397 + rm /tmp/62.dkH /tmp/spdk_tgt_config.json.JKd 00:05:39.397 + exit 1 00:05:39.397 INFO: configuration change detected. 00:05:39.397 01:46:49 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:39.397 01:46:49 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:39.397 01:46:49 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:39.397 01:46:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:39.397 01:46:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.397 01:46:49 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:39.397 01:46:49 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:39.397 01:46:49 json_config -- json_config/json_config.sh@324 -- # [[ -n 69748 ]] 00:05:39.397 01:46:49 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:39.397 01:46:49 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:39.397 01:46:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:39.397 01:46:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.397 01:46:49 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:39.397 01:46:49 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:39.397 01:46:49 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:39.397 01:46:49 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:39.397 01:46:49 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:39.397 01:46:49 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:39.397 01:46:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:39.397 01:46:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.656 01:46:50 json_config -- json_config/json_config.sh@330 -- # killprocess 69748 00:05:39.656 01:46:50 json_config -- common/autotest_common.sh@954 -- # '[' -z 69748 ']' 00:05:39.656 01:46:50 json_config -- common/autotest_common.sh@958 -- # kill -0 69748 00:05:39.656 01:46:50 json_config -- common/autotest_common.sh@959 -- # uname 00:05:39.656 01:46:50 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.656 01:46:50 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69748 00:05:39.656 
killing process with pid 69748 00:05:39.656 01:46:50 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.656 01:46:50 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.656 01:46:50 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69748' 00:05:39.656 01:46:50 json_config -- common/autotest_common.sh@973 -- # kill 69748 00:05:39.656 01:46:50 json_config -- common/autotest_common.sh@978 -- # wait 69748 00:05:39.656 01:46:50 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:39.656 01:46:50 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:39.656 01:46:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:39.656 01:46:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.656 INFO: Success 00:05:39.656 01:46:50 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:39.656 01:46:50 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:39.656 ************************************ 00:05:39.656 END TEST json_config 00:05:39.656 ************************************ 00:05:39.656 00:05:39.656 real 0m8.393s 00:05:39.656 user 0m12.261s 00:05:39.656 sys 0m1.385s 00:05:39.656 01:46:50 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.656 01:46:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.656 01:46:50 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:39.656 01:46:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.656 01:46:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.656 01:46:50 -- common/autotest_common.sh@10 -- # set +x 00:05:39.656 ************************************ 00:05:39.656 START TEST json_config_extra_key 00:05:39.656 ************************************ 00:05:39.656 01:46:50 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:39.916 01:46:50 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:39.916 01:46:50 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:39.916 01:46:50 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:39.916 01:46:50 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:39.916 01:46:50 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.916 01:46:50 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.916 01:46:50 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.916 01:46:50 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.916 01:46:50 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.916 01:46:50 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.916 01:46:50 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.916 01:46:50 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.916 01:46:50 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.916 01:46:50 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.916 01:46:50 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.916 01:46:50 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:39.916 01:46:50 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:39.916 01:46:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.916 01:46:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:39.916 01:46:50 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:39.916 01:46:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:39.916 01:46:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.916 01:46:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:39.916 01:46:50 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.916 01:46:50 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:39.916 01:46:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:39.916 01:46:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.916 01:46:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:39.916 01:46:50 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.917 01:46:50 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.917 01:46:50 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.917 01:46:50 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:39.917 01:46:50 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.917 01:46:50 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:39.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.917 --rc genhtml_branch_coverage=1 00:05:39.917 --rc genhtml_function_coverage=1 00:05:39.917 --rc genhtml_legend=1 00:05:39.917 --rc geninfo_all_blocks=1 00:05:39.917 --rc geninfo_unexecuted_blocks=1 00:05:39.917 00:05:39.917 ' 00:05:39.917 01:46:50 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:39.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.917 --rc genhtml_branch_coverage=1 00:05:39.917 --rc genhtml_function_coverage=1 00:05:39.917 --rc genhtml_legend=1 00:05:39.917 --rc geninfo_all_blocks=1 00:05:39.917 --rc geninfo_unexecuted_blocks=1 00:05:39.917 00:05:39.917 ' 00:05:39.917 01:46:50 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:39.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.917 --rc genhtml_branch_coverage=1 00:05:39.917 --rc genhtml_function_coverage=1 00:05:39.917 --rc genhtml_legend=1 00:05:39.917 --rc geninfo_all_blocks=1 00:05:39.917 --rc geninfo_unexecuted_blocks=1 00:05:39.917 00:05:39.917 ' 00:05:39.917 01:46:50 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:39.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.917 --rc genhtml_branch_coverage=1 00:05:39.917 --rc genhtml_function_coverage=1 00:05:39.917 --rc genhtml_legend=1 00:05:39.917 --rc geninfo_all_blocks=1 00:05:39.917 --rc geninfo_unexecuted_blocks=1 00:05:39.917 00:05:39.917 ' 00:05:39.917 01:46:50 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:39.917 01:46:50 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:05:39.917 01:46:50 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:39.917 01:46:50 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:39.917 01:46:50 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:39.917 01:46:50 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:39.917 01:46:50 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:39.917 01:46:50 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:39.917 01:46:50 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:39.917 01:46:50 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:39.917 01:46:50 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:39.917 01:46:50 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:39.917 01:46:50 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:05:39.917 01:46:50 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:05:39.917 01:46:50 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:39.917 01:46:50 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:39.917 01:46:50 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:39.917 01:46:50 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:39.917 01:46:50 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:39.917 01:46:50 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:39.917 01:46:50 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:39.917 01:46:50 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:39.917 01:46:50 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:39.917 01:46:50 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.917 01:46:50 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.917 01:46:50 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.917 01:46:50 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:39.917 01:46:50 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.917 01:46:50 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:39.917 01:46:50 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:39.917 01:46:50 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:39.917 01:46:50 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:39.917 01:46:50 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:39.917 01:46:50 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:39.917 01:46:50 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:39.917 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:39.917 01:46:50 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:39.917 01:46:50 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:39.917 01:46:50 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:39.917 INFO: launching applications... 00:05:39.917 01:46:50 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:39.917 01:46:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:39.917 01:46:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:39.917 01:46:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:39.917 01:46:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:39.917 01:46:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:39.917 01:46:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:39.917 01:46:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:39.917 01:46:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:39.917 01:46:50 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:39.917 01:46:50 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
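The launch announced here starts spdk_tgt with the extra-key JSON and blocks until the RPC socket answers. A sketch of that start-and-wait, with the poll loop standing in for waitforlisten (retry interval hypothetical):

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    sock=/var/tmp/spdk_tgt.sock

    $bin -m 0x1 -s 1024 -r "$sock" \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &

    # Poll until the target services an RPC on the socket.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done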
00:05:39.917 01:46:50 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:39.917 01:46:50 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:39.917 01:46:50 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:39.917 01:46:50 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:39.917 01:46:50 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:39.917 01:46:50 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:39.917 01:46:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:39.917 01:46:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:39.917 Waiting for target to run... 00:05:39.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:39.917 01:46:50 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69896 00:05:39.917 01:46:50 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:39.917 01:46:50 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69896 /var/tmp/spdk_tgt.sock 00:05:39.917 01:46:50 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 69896 ']' 00:05:39.917 01:46:50 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:39.917 01:46:50 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.917 01:46:50 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:39.918 01:46:50 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:39.918 01:46:50 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.918 01:46:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:39.918 [2024-11-19 01:46:50.530565] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:05:39.918 [2024-11-19 01:46:50.530667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69896 ] 00:05:40.486 [2024-11-19 01:46:50.845903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.486 [2024-11-19 01:46:50.857392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.486 [2024-11-19 01:46:50.879894] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:41.055 00:05:41.055 INFO: shutting down applications... 00:05:41.055 01:46:51 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.055 01:46:51 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:41.055 01:46:51 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:41.055 01:46:51 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:41.055 01:46:51 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:41.055 01:46:51 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:41.055 01:46:51 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:41.055 01:46:51 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69896 ]] 00:05:41.055 01:46:51 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69896 00:05:41.055 01:46:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:41.055 01:46:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:41.055 01:46:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69896 00:05:41.055 01:46:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:41.624 01:46:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:41.624 01:46:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:41.624 01:46:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69896 00:05:41.624 01:46:52 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:41.624 01:46:52 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:41.624 01:46:52 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:41.624 01:46:52 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:41.624 SPDK target shutdown done 00:05:41.624 01:46:52 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:41.624 Success 00:05:41.624 00:05:41.624 real 0m1.737s 00:05:41.624 user 0m1.511s 00:05:41.624 sys 0m0.344s 00:05:41.624 01:46:52 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.624 01:46:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:41.624 ************************************ 00:05:41.624 END TEST json_config_extra_key 00:05:41.624 ************************************ 00:05:41.624 01:46:52 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:41.624 01:46:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.624 01:46:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.624 01:46:52 -- common/autotest_common.sh@10 -- # set +x 00:05:41.624 ************************************ 00:05:41.624 START TEST alias_rpc 00:05:41.624 ************************************ 00:05:41.624 01:46:52 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:41.624 * Looking for test storage... 
00:05:41.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:41.624 01:46:52 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.624 01:46:52 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.624 01:46:52 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.624 01:46:52 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.624 01:46:52 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.624 01:46:52 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.624 01:46:52 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.624 01:46:52 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.624 01:46:52 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.624 01:46:52 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.624 01:46:52 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.624 01:46:52 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.624 01:46:52 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.624 01:46:52 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.624 01:46:52 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.624 01:46:52 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:41.624 01:46:52 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:41.624 01:46:52 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.624 01:46:52 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.624 01:46:52 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:41.624 01:46:52 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:41.624 01:46:52 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.624 01:46:52 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:41.624 01:46:52 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.885 01:46:52 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:41.885 01:46:52 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:41.885 01:46:52 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.885 01:46:52 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:41.885 01:46:52 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.885 01:46:52 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.885 01:46:52 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.885 01:46:52 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:41.885 01:46:52 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.885 01:46:52 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.885 --rc genhtml_branch_coverage=1 00:05:41.885 --rc genhtml_function_coverage=1 00:05:41.885 --rc genhtml_legend=1 00:05:41.885 --rc geninfo_all_blocks=1 00:05:41.885 --rc geninfo_unexecuted_blocks=1 00:05:41.885 00:05:41.885 ' 00:05:41.885 01:46:52 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.885 --rc genhtml_branch_coverage=1 00:05:41.885 --rc genhtml_function_coverage=1 00:05:41.885 --rc genhtml_legend=1 00:05:41.885 --rc geninfo_all_blocks=1 00:05:41.885 --rc geninfo_unexecuted_blocks=1 00:05:41.885 00:05:41.885 ' 00:05:41.885 01:46:52 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:41.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.885 --rc genhtml_branch_coverage=1 00:05:41.885 --rc genhtml_function_coverage=1 00:05:41.885 --rc genhtml_legend=1 00:05:41.885 --rc geninfo_all_blocks=1 00:05:41.885 --rc geninfo_unexecuted_blocks=1 00:05:41.885 00:05:41.885 ' 00:05:41.885 01:46:52 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.885 --rc genhtml_branch_coverage=1 00:05:41.885 --rc genhtml_function_coverage=1 00:05:41.885 --rc genhtml_legend=1 00:05:41.885 --rc geninfo_all_blocks=1 00:05:41.885 --rc geninfo_unexecuted_blocks=1 00:05:41.885 00:05:41.885 ' 00:05:41.885 01:46:52 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:41.885 01:46:52 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69969 00:05:41.885 01:46:52 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:41.885 01:46:52 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69969 00:05:41.885 01:46:52 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 69969 ']' 00:05:41.885 01:46:52 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.885 01:46:52 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.885 01:46:52 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.885 01:46:52 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.885 01:46:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.885 [2024-11-19 01:46:52.319046] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
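Once this target is up, the alias test's core action is rpc.py load_config -i, which replays a saved configuration from stdin; the -i flag appears to be --include-aliases, registering the deprecated method names the test exercises. In isolation (input path hypothetical):

    # Replay a config on the running target, keeping deprecated RPC aliases.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i < /tmp/alias_config.json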
00:05:41.885 [2024-11-19 01:46:52.319319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69969 ] 00:05:41.885 [2024-11-19 01:46:52.460202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.885 [2024-11-19 01:46:52.479182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.144 [2024-11-19 01:46:52.517133] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:42.144 01:46:52 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.144 01:46:52 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:42.144 01:46:52 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:42.403 01:46:52 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69969 00:05:42.403 01:46:52 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 69969 ']' 00:05:42.403 01:46:52 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 69969 00:05:42.403 01:46:52 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:42.403 01:46:52 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.403 01:46:52 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69969 00:05:42.403 killing process with pid 69969 00:05:42.403 01:46:52 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.403 01:46:52 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.403 01:46:52 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69969' 00:05:42.403 01:46:52 alias_rpc -- common/autotest_common.sh@973 -- # kill 69969 00:05:42.403 01:46:52 alias_rpc -- common/autotest_common.sh@978 -- # wait 69969 00:05:42.662 ************************************ 00:05:42.662 END TEST alias_rpc 00:05:42.662 ************************************ 00:05:42.662 00:05:42.662 real 0m1.116s 00:05:42.662 user 0m1.340s 00:05:42.662 sys 0m0.287s 00:05:42.662 01:46:53 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.662 01:46:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.662 01:46:53 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:42.662 01:46:53 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:42.662 01:46:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.662 01:46:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.662 01:46:53 -- common/autotest_common.sh@10 -- # set +x 00:05:42.662 ************************************ 00:05:42.662 START TEST spdkcli_tcp 00:05:42.662 ************************************ 00:05:42.662 01:46:53 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:42.922 * Looking for test storage... 
00:05:42.922 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:42.922 01:46:53 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:42.922 01:46:53 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:42.922 01:46:53 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:42.922 01:46:53 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:42.922 01:46:53 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.922 01:46:53 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.922 01:46:53 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.922 01:46:53 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.922 01:46:53 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.922 01:46:53 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.922 01:46:53 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.922 01:46:53 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.922 01:46:53 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.922 01:46:53 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.922 01:46:53 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.922 01:46:53 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:42.922 01:46:53 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:42.922 01:46:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.922 01:46:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:42.922 01:46:53 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:42.922 01:46:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:42.922 01:46:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.922 01:46:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:42.922 01:46:53 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.922 01:46:53 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:42.922 01:46:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:42.922 01:46:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.922 01:46:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:42.922 01:46:53 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.922 01:46:53 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.922 01:46:53 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.922 01:46:53 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:42.922 01:46:53 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.922 01:46:53 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:42.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.922 --rc genhtml_branch_coverage=1 00:05:42.922 --rc genhtml_function_coverage=1 00:05:42.922 --rc genhtml_legend=1 00:05:42.922 --rc geninfo_all_blocks=1 00:05:42.922 --rc geninfo_unexecuted_blocks=1 00:05:42.922 00:05:42.922 ' 00:05:42.922 01:46:53 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:42.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.922 --rc genhtml_branch_coverage=1 00:05:42.922 --rc genhtml_function_coverage=1 00:05:42.922 --rc genhtml_legend=1 00:05:42.922 --rc geninfo_all_blocks=1 00:05:42.922 --rc geninfo_unexecuted_blocks=1 00:05:42.922 
00:05:42.922 ' 00:05:42.922 01:46:53 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:42.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.922 --rc genhtml_branch_coverage=1 00:05:42.922 --rc genhtml_function_coverage=1 00:05:42.922 --rc genhtml_legend=1 00:05:42.922 --rc geninfo_all_blocks=1 00:05:42.922 --rc geninfo_unexecuted_blocks=1 00:05:42.922 00:05:42.922 ' 00:05:42.922 01:46:53 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:42.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.922 --rc genhtml_branch_coverage=1 00:05:42.922 --rc genhtml_function_coverage=1 00:05:42.922 --rc genhtml_legend=1 00:05:42.922 --rc geninfo_all_blocks=1 00:05:42.922 --rc geninfo_unexecuted_blocks=1 00:05:42.922 00:05:42.922 ' 00:05:42.922 01:46:53 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:42.922 01:46:53 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:42.922 01:46:53 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:42.922 01:46:53 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:42.922 01:46:53 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:42.922 01:46:53 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:42.923 01:46:53 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:42.923 01:46:53 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:42.923 01:46:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.923 01:46:53 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70045 00:05:42.923 01:46:53 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:42.923 01:46:53 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70045 00:05:42.923 01:46:53 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 70045 ']' 00:05:42.923 01:46:53 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.923 01:46:53 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.923 01:46:53 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.923 01:46:53 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.923 01:46:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.923 [2024-11-19 01:46:53.490990] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
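spdkcli_tcp talks to the RPC server over TCP rather than the UNIX socket, so the test (visible just below) bridges the two with socat and points rpc.py at 127.0.0.1:9998. The same bridge in isolation:

    # Expose the target's UNIX-domain RPC socket on TCP port 9998.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &

    # Issue an RPC over the bridge: -r sets retries, -t the per-call timeout.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods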
00:05:42.923 [2024-11-19 01:46:53.491293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70045 ] 00:05:43.182 [2024-11-19 01:46:53.638897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.182 [2024-11-19 01:46:53.661190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.182 [2024-11-19 01:46:53.661196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.182 [2024-11-19 01:46:53.695446] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:43.441 01:46:53 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.441 01:46:53 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:43.441 01:46:53 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70055 00:05:43.441 01:46:53 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:43.441 01:46:53 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:43.701 [ 00:05:43.701 "bdev_malloc_delete", 00:05:43.701 "bdev_malloc_create", 00:05:43.701 "bdev_null_resize", 00:05:43.701 "bdev_null_delete", 00:05:43.701 "bdev_null_create", 00:05:43.701 "bdev_nvme_cuse_unregister", 00:05:43.701 "bdev_nvme_cuse_register", 00:05:43.701 "bdev_opal_new_user", 00:05:43.701 "bdev_opal_set_lock_state", 00:05:43.701 "bdev_opal_delete", 00:05:43.701 "bdev_opal_get_info", 00:05:43.701 "bdev_opal_create", 00:05:43.701 "bdev_nvme_opal_revert", 00:05:43.701 "bdev_nvme_opal_init", 00:05:43.701 "bdev_nvme_send_cmd", 00:05:43.701 "bdev_nvme_set_keys", 00:05:43.701 "bdev_nvme_get_path_iostat", 00:05:43.701 "bdev_nvme_get_mdns_discovery_info", 00:05:43.701 "bdev_nvme_stop_mdns_discovery", 00:05:43.701 "bdev_nvme_start_mdns_discovery", 00:05:43.701 "bdev_nvme_set_multipath_policy", 00:05:43.701 "bdev_nvme_set_preferred_path", 00:05:43.701 "bdev_nvme_get_io_paths", 00:05:43.701 "bdev_nvme_remove_error_injection", 00:05:43.701 "bdev_nvme_add_error_injection", 00:05:43.701 "bdev_nvme_get_discovery_info", 00:05:43.701 "bdev_nvme_stop_discovery", 00:05:43.701 "bdev_nvme_start_discovery", 00:05:43.701 "bdev_nvme_get_controller_health_info", 00:05:43.701 "bdev_nvme_disable_controller", 00:05:43.701 "bdev_nvme_enable_controller", 00:05:43.701 "bdev_nvme_reset_controller", 00:05:43.701 "bdev_nvme_get_transport_statistics", 00:05:43.701 "bdev_nvme_apply_firmware", 00:05:43.701 "bdev_nvme_detach_controller", 00:05:43.701 "bdev_nvme_get_controllers", 00:05:43.701 "bdev_nvme_attach_controller", 00:05:43.701 "bdev_nvme_set_hotplug", 00:05:43.701 "bdev_nvme_set_options", 00:05:43.701 "bdev_passthru_delete", 00:05:43.701 "bdev_passthru_create", 00:05:43.701 "bdev_lvol_set_parent_bdev", 00:05:43.701 "bdev_lvol_set_parent", 00:05:43.701 "bdev_lvol_check_shallow_copy", 00:05:43.701 "bdev_lvol_start_shallow_copy", 00:05:43.701 "bdev_lvol_grow_lvstore", 00:05:43.701 "bdev_lvol_get_lvols", 00:05:43.701 "bdev_lvol_get_lvstores", 00:05:43.701 "bdev_lvol_delete", 00:05:43.701 "bdev_lvol_set_read_only", 00:05:43.701 "bdev_lvol_resize", 00:05:43.701 "bdev_lvol_decouple_parent", 00:05:43.701 "bdev_lvol_inflate", 00:05:43.701 "bdev_lvol_rename", 00:05:43.701 "bdev_lvol_clone_bdev", 00:05:43.701 "bdev_lvol_clone", 00:05:43.701 "bdev_lvol_snapshot", 
00:05:43.701 "bdev_lvol_create", 00:05:43.701 "bdev_lvol_delete_lvstore", 00:05:43.701 "bdev_lvol_rename_lvstore", 00:05:43.701 "bdev_lvol_create_lvstore", 00:05:43.701 "bdev_raid_set_options", 00:05:43.701 "bdev_raid_remove_base_bdev", 00:05:43.701 "bdev_raid_add_base_bdev", 00:05:43.701 "bdev_raid_delete", 00:05:43.701 "bdev_raid_create", 00:05:43.701 "bdev_raid_get_bdevs", 00:05:43.701 "bdev_error_inject_error", 00:05:43.701 "bdev_error_delete", 00:05:43.701 "bdev_error_create", 00:05:43.701 "bdev_split_delete", 00:05:43.701 "bdev_split_create", 00:05:43.701 "bdev_delay_delete", 00:05:43.701 "bdev_delay_create", 00:05:43.701 "bdev_delay_update_latency", 00:05:43.701 "bdev_zone_block_delete", 00:05:43.701 "bdev_zone_block_create", 00:05:43.701 "blobfs_create", 00:05:43.701 "blobfs_detect", 00:05:43.701 "blobfs_set_cache_size", 00:05:43.701 "bdev_aio_delete", 00:05:43.701 "bdev_aio_rescan", 00:05:43.701 "bdev_aio_create", 00:05:43.701 "bdev_ftl_set_property", 00:05:43.701 "bdev_ftl_get_properties", 00:05:43.701 "bdev_ftl_get_stats", 00:05:43.701 "bdev_ftl_unmap", 00:05:43.701 "bdev_ftl_unload", 00:05:43.701 "bdev_ftl_delete", 00:05:43.701 "bdev_ftl_load", 00:05:43.701 "bdev_ftl_create", 00:05:43.701 "bdev_virtio_attach_controller", 00:05:43.701 "bdev_virtio_scsi_get_devices", 00:05:43.701 "bdev_virtio_detach_controller", 00:05:43.701 "bdev_virtio_blk_set_hotplug", 00:05:43.701 "bdev_iscsi_delete", 00:05:43.701 "bdev_iscsi_create", 00:05:43.701 "bdev_iscsi_set_options", 00:05:43.701 "bdev_uring_delete", 00:05:43.701 "bdev_uring_rescan", 00:05:43.701 "bdev_uring_create", 00:05:43.701 "accel_error_inject_error", 00:05:43.701 "ioat_scan_accel_module", 00:05:43.701 "dsa_scan_accel_module", 00:05:43.701 "iaa_scan_accel_module", 00:05:43.701 "keyring_file_remove_key", 00:05:43.701 "keyring_file_add_key", 00:05:43.701 "keyring_linux_set_options", 00:05:43.701 "fsdev_aio_delete", 00:05:43.701 "fsdev_aio_create", 00:05:43.701 "iscsi_get_histogram", 00:05:43.701 "iscsi_enable_histogram", 00:05:43.701 "iscsi_set_options", 00:05:43.701 "iscsi_get_auth_groups", 00:05:43.701 "iscsi_auth_group_remove_secret", 00:05:43.701 "iscsi_auth_group_add_secret", 00:05:43.701 "iscsi_delete_auth_group", 00:05:43.701 "iscsi_create_auth_group", 00:05:43.701 "iscsi_set_discovery_auth", 00:05:43.701 "iscsi_get_options", 00:05:43.701 "iscsi_target_node_request_logout", 00:05:43.701 "iscsi_target_node_set_redirect", 00:05:43.701 "iscsi_target_node_set_auth", 00:05:43.701 "iscsi_target_node_add_lun", 00:05:43.701 "iscsi_get_stats", 00:05:43.701 "iscsi_get_connections", 00:05:43.701 "iscsi_portal_group_set_auth", 00:05:43.702 "iscsi_start_portal_group", 00:05:43.702 "iscsi_delete_portal_group", 00:05:43.702 "iscsi_create_portal_group", 00:05:43.702 "iscsi_get_portal_groups", 00:05:43.702 "iscsi_delete_target_node", 00:05:43.702 "iscsi_target_node_remove_pg_ig_maps", 00:05:43.702 "iscsi_target_node_add_pg_ig_maps", 00:05:43.702 "iscsi_create_target_node", 00:05:43.702 "iscsi_get_target_nodes", 00:05:43.702 "iscsi_delete_initiator_group", 00:05:43.702 "iscsi_initiator_group_remove_initiators", 00:05:43.702 "iscsi_initiator_group_add_initiators", 00:05:43.702 "iscsi_create_initiator_group", 00:05:43.702 "iscsi_get_initiator_groups", 00:05:43.702 "nvmf_set_crdt", 00:05:43.702 "nvmf_set_config", 00:05:43.702 "nvmf_set_max_subsystems", 00:05:43.702 "nvmf_stop_mdns_prr", 00:05:43.702 "nvmf_publish_mdns_prr", 00:05:43.702 "nvmf_subsystem_get_listeners", 00:05:43.702 "nvmf_subsystem_get_qpairs", 00:05:43.702 
"nvmf_subsystem_get_controllers", 00:05:43.702 "nvmf_get_stats", 00:05:43.702 "nvmf_get_transports", 00:05:43.702 "nvmf_create_transport", 00:05:43.702 "nvmf_get_targets", 00:05:43.702 "nvmf_delete_target", 00:05:43.702 "nvmf_create_target", 00:05:43.702 "nvmf_subsystem_allow_any_host", 00:05:43.702 "nvmf_subsystem_set_keys", 00:05:43.702 "nvmf_subsystem_remove_host", 00:05:43.702 "nvmf_subsystem_add_host", 00:05:43.702 "nvmf_ns_remove_host", 00:05:43.702 "nvmf_ns_add_host", 00:05:43.702 "nvmf_subsystem_remove_ns", 00:05:43.702 "nvmf_subsystem_set_ns_ana_group", 00:05:43.702 "nvmf_subsystem_add_ns", 00:05:43.702 "nvmf_subsystem_listener_set_ana_state", 00:05:43.702 "nvmf_discovery_get_referrals", 00:05:43.702 "nvmf_discovery_remove_referral", 00:05:43.702 "nvmf_discovery_add_referral", 00:05:43.702 "nvmf_subsystem_remove_listener", 00:05:43.702 "nvmf_subsystem_add_listener", 00:05:43.702 "nvmf_delete_subsystem", 00:05:43.702 "nvmf_create_subsystem", 00:05:43.702 "nvmf_get_subsystems", 00:05:43.702 "env_dpdk_get_mem_stats", 00:05:43.702 "nbd_get_disks", 00:05:43.702 "nbd_stop_disk", 00:05:43.702 "nbd_start_disk", 00:05:43.702 "ublk_recover_disk", 00:05:43.702 "ublk_get_disks", 00:05:43.702 "ublk_stop_disk", 00:05:43.702 "ublk_start_disk", 00:05:43.702 "ublk_destroy_target", 00:05:43.702 "ublk_create_target", 00:05:43.702 "virtio_blk_create_transport", 00:05:43.702 "virtio_blk_get_transports", 00:05:43.702 "vhost_controller_set_coalescing", 00:05:43.702 "vhost_get_controllers", 00:05:43.702 "vhost_delete_controller", 00:05:43.702 "vhost_create_blk_controller", 00:05:43.702 "vhost_scsi_controller_remove_target", 00:05:43.702 "vhost_scsi_controller_add_target", 00:05:43.702 "vhost_start_scsi_controller", 00:05:43.702 "vhost_create_scsi_controller", 00:05:43.702 "thread_set_cpumask", 00:05:43.702 "scheduler_set_options", 00:05:43.702 "framework_get_governor", 00:05:43.702 "framework_get_scheduler", 00:05:43.702 "framework_set_scheduler", 00:05:43.702 "framework_get_reactors", 00:05:43.702 "thread_get_io_channels", 00:05:43.702 "thread_get_pollers", 00:05:43.702 "thread_get_stats", 00:05:43.702 "framework_monitor_context_switch", 00:05:43.702 "spdk_kill_instance", 00:05:43.702 "log_enable_timestamps", 00:05:43.702 "log_get_flags", 00:05:43.702 "log_clear_flag", 00:05:43.702 "log_set_flag", 00:05:43.702 "log_get_level", 00:05:43.702 "log_set_level", 00:05:43.702 "log_get_print_level", 00:05:43.702 "log_set_print_level", 00:05:43.702 "framework_enable_cpumask_locks", 00:05:43.702 "framework_disable_cpumask_locks", 00:05:43.702 "framework_wait_init", 00:05:43.702 "framework_start_init", 00:05:43.702 "scsi_get_devices", 00:05:43.702 "bdev_get_histogram", 00:05:43.702 "bdev_enable_histogram", 00:05:43.702 "bdev_set_qos_limit", 00:05:43.702 "bdev_set_qd_sampling_period", 00:05:43.702 "bdev_get_bdevs", 00:05:43.702 "bdev_reset_iostat", 00:05:43.702 "bdev_get_iostat", 00:05:43.702 "bdev_examine", 00:05:43.702 "bdev_wait_for_examine", 00:05:43.702 "bdev_set_options", 00:05:43.702 "accel_get_stats", 00:05:43.702 "accel_set_options", 00:05:43.702 "accel_set_driver", 00:05:43.702 "accel_crypto_key_destroy", 00:05:43.702 "accel_crypto_keys_get", 00:05:43.702 "accel_crypto_key_create", 00:05:43.702 "accel_assign_opc", 00:05:43.702 "accel_get_module_info", 00:05:43.702 "accel_get_opc_assignments", 00:05:43.702 "vmd_rescan", 00:05:43.702 "vmd_remove_device", 00:05:43.702 "vmd_enable", 00:05:43.702 "sock_get_default_impl", 00:05:43.702 "sock_set_default_impl", 00:05:43.702 "sock_impl_set_options", 00:05:43.702 
"sock_impl_get_options", 00:05:43.702 "iobuf_get_stats", 00:05:43.702 "iobuf_set_options", 00:05:43.702 "keyring_get_keys", 00:05:43.702 "framework_get_pci_devices", 00:05:43.702 "framework_get_config", 00:05:43.702 "framework_get_subsystems", 00:05:43.702 "fsdev_set_opts", 00:05:43.702 "fsdev_get_opts", 00:05:43.702 "trace_get_info", 00:05:43.702 "trace_get_tpoint_group_mask", 00:05:43.702 "trace_disable_tpoint_group", 00:05:43.702 "trace_enable_tpoint_group", 00:05:43.702 "trace_clear_tpoint_mask", 00:05:43.702 "trace_set_tpoint_mask", 00:05:43.702 "notify_get_notifications", 00:05:43.702 "notify_get_types", 00:05:43.702 "spdk_get_version", 00:05:43.702 "rpc_get_methods" 00:05:43.702 ] 00:05:43.702 01:46:54 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:43.702 01:46:54 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:43.702 01:46:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:43.702 01:46:54 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:43.702 01:46:54 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70045 00:05:43.702 01:46:54 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 70045 ']' 00:05:43.702 01:46:54 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 70045 00:05:43.702 01:46:54 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:43.702 01:46:54 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.702 01:46:54 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70045 00:05:43.702 killing process with pid 70045 00:05:43.702 01:46:54 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.702 01:46:54 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.702 01:46:54 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70045' 00:05:43.702 01:46:54 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 70045 00:05:43.702 01:46:54 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 70045 00:05:43.961 ************************************ 00:05:43.961 END TEST spdkcli_tcp 00:05:43.961 ************************************ 00:05:43.961 00:05:43.961 real 0m1.168s 00:05:43.961 user 0m2.057s 00:05:43.961 sys 0m0.344s 00:05:43.961 01:46:54 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.961 01:46:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:43.961 01:46:54 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:43.961 01:46:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.961 01:46:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.961 01:46:54 -- common/autotest_common.sh@10 -- # set +x 00:05:43.961 ************************************ 00:05:43.961 START TEST dpdk_mem_utility 00:05:43.961 ************************************ 00:05:43.961 01:46:54 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:43.961 * Looking for test storage... 
00:05:43.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:43.961 01:46:54 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:43.961 01:46:54 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:43.961 01:46:54 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:44.220 01:46:54 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:44.220 01:46:54 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.220 01:46:54 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.220 01:46:54 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.220 01:46:54 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.220 01:46:54 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.220 01:46:54 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.220 01:46:54 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.220 01:46:54 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.220 01:46:54 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.220 01:46:54 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.220 01:46:54 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.220 01:46:54 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:44.220 01:46:54 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:44.220 01:46:54 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.220 01:46:54 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.220 01:46:54 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:44.220 01:46:54 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:44.220 01:46:54 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.220 01:46:54 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:44.220 01:46:54 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.221 01:46:54 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:44.221 01:46:54 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:44.221 01:46:54 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.221 01:46:54 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:44.221 01:46:54 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.221 01:46:54 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.221 01:46:54 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.221 01:46:54 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:44.221 01:46:54 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.221 01:46:54 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:44.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.221 --rc genhtml_branch_coverage=1 00:05:44.221 --rc genhtml_function_coverage=1 00:05:44.221 --rc genhtml_legend=1 00:05:44.221 --rc geninfo_all_blocks=1 00:05:44.221 --rc geninfo_unexecuted_blocks=1 00:05:44.221 00:05:44.221 ' 00:05:44.221 01:46:54 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:44.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.221 --rc 
genhtml_branch_coverage=1 00:05:44.221 --rc genhtml_function_coverage=1 00:05:44.221 --rc genhtml_legend=1 00:05:44.221 --rc geninfo_all_blocks=1 00:05:44.221 --rc geninfo_unexecuted_blocks=1 00:05:44.221 00:05:44.221 ' 00:05:44.221 01:46:54 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:44.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.221 --rc genhtml_branch_coverage=1 00:05:44.221 --rc genhtml_function_coverage=1 00:05:44.221 --rc genhtml_legend=1 00:05:44.221 --rc geninfo_all_blocks=1 00:05:44.221 --rc geninfo_unexecuted_blocks=1 00:05:44.221 00:05:44.221 ' 00:05:44.221 01:46:54 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:44.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.221 --rc genhtml_branch_coverage=1 00:05:44.221 --rc genhtml_function_coverage=1 00:05:44.221 --rc genhtml_legend=1 00:05:44.221 --rc geninfo_all_blocks=1 00:05:44.221 --rc geninfo_unexecuted_blocks=1 00:05:44.221 00:05:44.221 ' 00:05:44.221 01:46:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:44.221 01:46:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:44.221 01:46:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70131 00:05:44.221 01:46:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70131 00:05:44.221 01:46:54 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 70131 ']' 00:05:44.221 01:46:54 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.221 01:46:54 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.221 01:46:54 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.221 01:46:54 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.221 01:46:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:44.221 [2024-11-19 01:46:54.697335] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:05:44.221 [2024-11-19 01:46:54.697652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70131 ] 00:05:44.480 [2024-11-19 01:46:54.844953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.480 [2024-11-19 01:46:54.869740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.480 [2024-11-19 01:46:54.908560] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:44.480 01:46:55 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.480 01:46:55 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:44.480 01:46:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:44.480 01:46:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:44.480 01:46:55 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.480 01:46:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:44.480 { 00:05:44.480 "filename": "/tmp/spdk_mem_dump.txt" 00:05:44.480 } 00:05:44.480 01:46:55 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.480 01:46:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:44.741 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:44.741 1 heaps totaling size 810.000000 MiB 00:05:44.741 size: 810.000000 MiB heap id: 0 00:05:44.741 end heaps---------- 00:05:44.741 9 mempools totaling size 595.772034 MiB 00:05:44.741 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:44.741 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:44.741 size: 92.545471 MiB name: bdev_io_70131 00:05:44.741 size: 50.003479 MiB name: msgpool_70131 00:05:44.741 size: 36.509338 MiB name: fsdev_io_70131 00:05:44.741 size: 21.763794 MiB name: PDU_Pool 00:05:44.741 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:44.741 size: 4.133484 MiB name: evtpool_70131 00:05:44.741 size: 0.026123 MiB name: Session_Pool 00:05:44.741 end mempools------- 00:05:44.741 6 memzones totaling size 4.142822 MiB 00:05:44.741 size: 1.000366 MiB name: RG_ring_0_70131 00:05:44.741 size: 1.000366 MiB name: RG_ring_1_70131 00:05:44.741 size: 1.000366 MiB name: RG_ring_4_70131 00:05:44.741 size: 1.000366 MiB name: RG_ring_5_70131 00:05:44.741 size: 0.125366 MiB name: RG_ring_2_70131 00:05:44.741 size: 0.015991 MiB name: RG_ring_3_70131 00:05:44.741 end memzones------- 00:05:44.741 01:46:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:44.741 heap id: 0 total size: 810.000000 MiB number of busy elements: 319 number of free elements: 15 00:05:44.741 list of free elements. 
size: 10.812134 MiB 00:05:44.741 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:44.741 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:44.741 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:44.741 element at address: 0x200000400000 with size: 0.993958 MiB 00:05:44.741 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:44.741 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:44.741 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:44.741 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:44.741 element at address: 0x20001a600000 with size: 0.566589 MiB 00:05:44.741 element at address: 0x20000a600000 with size: 0.488892 MiB 00:05:44.741 element at address: 0x200000c00000 with size: 0.487000 MiB 00:05:44.741 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:44.741 element at address: 0x200003e00000 with size: 0.480286 MiB 00:05:44.741 element at address: 0x200027a00000 with size: 0.395752 MiB 00:05:44.741 element at address: 0x200000800000 with size: 0.351746 MiB 00:05:44.741 list of standard malloc elements. size: 199.268982 MiB 00:05:44.741 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:44.741 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:44.741 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:44.741 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:44.741 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:44.741 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:44.741 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:44.741 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:44.741 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:44.741 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:05:44.741 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:44.741 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:05:44.741 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:05:44.741 element at address: 0x20000085e580 with size: 0.000183 MiB 00:05:44.741 element at address: 0x20000087e840 with size: 0.000183 MiB 00:05:44.741 element at address: 0x20000087e900 with size: 0.000183 MiB 00:05:44.741 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:05:44.741 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:05:44.741 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:05:44.741 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:05:44.741 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:05:44.741 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:05:44.741 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:05:44.741 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:44.741 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:05:44.741 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:44.741 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:44.741 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:44.741 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:44.741 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:44.741 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:44.741 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:44.741 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:44.741 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:44.741 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:05:44.741 element at 
address: 0x200000c7d6c0 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:44.741 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:44.742 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20000a67d4c0 
with size: 0.000183 MiB 00:05:44.742 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:44.742 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a6910c0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a691180 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a691240 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a691300 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a6913c0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a691480 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a691540 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a691600 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a691780 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a691840 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a691900 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a692080 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a692140 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a692200 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a692380 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a692440 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a692500 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a692680 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a692740 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a692800 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a692980 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a692b00 with size: 0.000183 MiB 
00:05:44.742 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a693040 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a693100 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a693280 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a693340 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a693400 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a693580 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a693640 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a693700 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a693880 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a693940 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a694000 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a694180 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a694240 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a694300 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a694480 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a694540 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a694600 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a694780 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a694840 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a694900 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:05:44.742 element at 
address: 0x20001a695080 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a695140 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a695200 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:44.742 element at address: 0x20001a695440 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200027a65500 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200027a655c0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200027a6c1c0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200027a6c3c0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200027a6c480 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200027a6c540 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:05:44.742 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6e1c0 
with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:44.743 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:44.743 list of memzone associated elements. 
size: 599.918884 MiB 00:05:44.743 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:44.743 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:44.743 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:44.743 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:44.743 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:44.743 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70131_0 00:05:44.743 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:44.743 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70131_0 00:05:44.743 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:44.743 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70131_0 00:05:44.743 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:44.743 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:44.743 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:44.743 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:44.743 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:44.743 associated memzone info: size: 3.000122 MiB name: MP_evtpool_70131_0 00:05:44.743 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:44.743 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70131 00:05:44.743 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:44.743 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70131 00:05:44.743 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:44.743 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:44.743 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:44.743 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:44.743 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:44.743 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:44.743 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:44.743 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:44.743 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:44.743 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70131 00:05:44.743 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:44.743 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70131 00:05:44.743 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:44.743 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70131 00:05:44.743 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:05:44.743 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70131 00:05:44.743 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:44.743 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70131 00:05:44.743 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:44.743 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70131 00:05:44.743 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:44.743 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:44.743 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:44.743 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:44.743 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:44.743 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:44.743 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:44.743 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_70131 00:05:44.743 element at address: 0x20000085e640 with size: 0.125488 MiB 00:05:44.743 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70131 00:05:44.743 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:44.743 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:44.743 element at address: 0x200027a65680 with size: 0.023743 MiB 00:05:44.743 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:44.743 element at address: 0x20000085a380 with size: 0.016113 MiB 00:05:44.743 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70131 00:05:44.743 element at address: 0x200027a6b7c0 with size: 0.002441 MiB 00:05:44.743 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:44.743 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:05:44.743 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70131 00:05:44.743 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:44.743 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70131 00:05:44.743 element at address: 0x20000085a180 with size: 0.000305 MiB 00:05:44.743 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70131 00:05:44.743 element at address: 0x200027a6c280 with size: 0.000305 MiB 00:05:44.743 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:44.743 01:46:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:44.743 01:46:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70131 00:05:44.743 01:46:55 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 70131 ']' 00:05:44.743 01:46:55 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 70131 00:05:44.743 01:46:55 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:44.743 01:46:55 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.743 01:46:55 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70131 00:05:44.743 killing process with pid 70131 00:05:44.743 01:46:55 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.743 01:46:55 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.743 01:46:55 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70131' 00:05:44.743 01:46:55 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 70131 00:05:44.744 01:46:55 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 70131 00:05:45.003 00:05:45.003 real 0m0.990s 00:05:45.003 user 0m1.017s 00:05:45.003 sys 0m0.327s 00:05:45.003 ************************************ 00:05:45.003 END TEST dpdk_mem_utility 00:05:45.003 ************************************ 00:05:45.003 01:46:55 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.003 01:46:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:45.003 01:46:55 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:45.003 01:46:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.003 01:46:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.003 01:46:55 -- common/autotest_common.sh@10 -- # set +x 
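Note: the dpdk_mem_utility pass above can be replayed by hand; the sequence below is a minimal sketch built only from commands and paths recorded in this log (spdk_tgt, the env_dpdk_get_mem_stats RPC listed in the methods dump earlier, and dpdk_mem_info.py). It assumes a built SPDK tree with hugepages already configured, and scripts/rpc.py stands in for the harness's rpc_cmd wrapper.
    # start the target in the background and give it time to open /var/tmp/spdk.sock
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    sleep 2
    # ask the target to write its DPDK memory stats; the RPC replies with the dump
    # location, {"filename": "/tmp/spdk_mem_dump.txt"} as seen in the output above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    # summarize the dump (heap, mempool and memzone totals), then print the
    # per-heap element detail shown above; -m 0 is the flag the test itself passes
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
    # clean up the background target
    kill %1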
00:05:45.003 ************************************ 00:05:45.003 START TEST event 00:05:45.003 ************************************ 00:05:45.003 01:46:55 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:45.003 * Looking for test storage... 00:05:45.003 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:45.003 01:46:55 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:45.003 01:46:55 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:45.003 01:46:55 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:45.263 01:46:55 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:45.263 01:46:55 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.263 01:46:55 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.263 01:46:55 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.263 01:46:55 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.263 01:46:55 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.263 01:46:55 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.263 01:46:55 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.263 01:46:55 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.263 01:46:55 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.263 01:46:55 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.263 01:46:55 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.263 01:46:55 event -- scripts/common.sh@344 -- # case "$op" in 00:05:45.263 01:46:55 event -- scripts/common.sh@345 -- # : 1 00:05:45.263 01:46:55 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.263 01:46:55 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:45.263 01:46:55 event -- scripts/common.sh@365 -- # decimal 1 00:05:45.263 01:46:55 event -- scripts/common.sh@353 -- # local d=1 00:05:45.263 01:46:55 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.263 01:46:55 event -- scripts/common.sh@355 -- # echo 1 00:05:45.263 01:46:55 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.263 01:46:55 event -- scripts/common.sh@366 -- # decimal 2 00:05:45.263 01:46:55 event -- scripts/common.sh@353 -- # local d=2 00:05:45.263 01:46:55 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.263 01:46:55 event -- scripts/common.sh@355 -- # echo 2 00:05:45.263 01:46:55 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.263 01:46:55 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.263 01:46:55 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.263 01:46:55 event -- scripts/common.sh@368 -- # return 0 00:05:45.263 01:46:55 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.263 01:46:55 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:45.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.263 --rc genhtml_branch_coverage=1 00:05:45.263 --rc genhtml_function_coverage=1 00:05:45.263 --rc genhtml_legend=1 00:05:45.263 --rc geninfo_all_blocks=1 00:05:45.263 --rc geninfo_unexecuted_blocks=1 00:05:45.263 00:05:45.263 ' 00:05:45.263 01:46:55 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:45.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.263 --rc genhtml_branch_coverage=1 00:05:45.263 --rc genhtml_function_coverage=1 00:05:45.263 --rc genhtml_legend=1 00:05:45.263 --rc 
geninfo_all_blocks=1 00:05:45.263 --rc geninfo_unexecuted_blocks=1 00:05:45.263 00:05:45.263 ' 00:05:45.263 01:46:55 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:45.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.263 --rc genhtml_branch_coverage=1 00:05:45.263 --rc genhtml_function_coverage=1 00:05:45.263 --rc genhtml_legend=1 00:05:45.263 --rc geninfo_all_blocks=1 00:05:45.263 --rc geninfo_unexecuted_blocks=1 00:05:45.263 00:05:45.263 ' 00:05:45.263 01:46:55 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:45.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.263 --rc genhtml_branch_coverage=1 00:05:45.263 --rc genhtml_function_coverage=1 00:05:45.263 --rc genhtml_legend=1 00:05:45.263 --rc geninfo_all_blocks=1 00:05:45.263 --rc geninfo_unexecuted_blocks=1 00:05:45.263 00:05:45.263 ' 00:05:45.263 01:46:55 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:45.263 01:46:55 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:45.263 01:46:55 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:45.263 01:46:55 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:45.263 01:46:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.263 01:46:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.263 ************************************ 00:05:45.263 START TEST event_perf 00:05:45.263 ************************************ 00:05:45.263 01:46:55 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:45.263 Running I/O for 1 seconds...[2024-11-19 01:46:55.689486] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:05:45.263 [2024-11-19 01:46:55.689778] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70203 ] 00:05:45.263 [2024-11-19 01:46:55.836387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:45.263 [2024-11-19 01:46:55.860722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.263 [2024-11-19 01:46:55.860853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.264 [2024-11-19 01:46:55.860989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:45.264 [2024-11-19 01:46:55.860994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.641 Running I/O for 1 seconds... 00:05:46.641 lcore 0: 203583 00:05:46.641 lcore 1: 203583 00:05:46.641 lcore 2: 203582 00:05:46.641 lcore 3: 203583 00:05:46.641 done. 
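Note: the event_perf run that just printed its per-lcore counts can be repeated outside the harness; the line below is the exact binary and flags recorded above (-m 0xF selects four reactor cores, -t 1 a one-second run), assuming the same built SPDK tree.
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1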
00:05:46.641 00:05:46.641 ************************************ 00:05:46.641 END TEST event_perf 00:05:46.641 real 0m1.238s 00:05:46.641 user 0m4.075s 00:05:46.641 sys 0m0.045s 00:05:46.641 01:46:56 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.641 01:46:56 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:46.641 ************************************ 00:05:46.641 01:46:56 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:46.641 01:46:56 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:46.641 01:46:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.641 01:46:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.641 ************************************ 00:05:46.641 START TEST event_reactor 00:05:46.641 ************************************ 00:05:46.641 01:46:56 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:46.641 [2024-11-19 01:46:56.980617] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:05:46.641 [2024-11-19 01:46:56.980870] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70242 ] 00:05:46.641 [2024-11-19 01:46:57.118258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.641 [2024-11-19 01:46:57.136251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.578 test_start 00:05:47.578 oneshot 00:05:47.578 tick 100 00:05:47.578 tick 100 00:05:47.578 tick 250 00:05:47.578 tick 100 00:05:47.578 tick 100 00:05:47.578 tick 250 00:05:47.578 tick 100 00:05:47.578 tick 500 00:05:47.578 tick 100 00:05:47.578 tick 100 00:05:47.578 tick 250 00:05:47.578 tick 100 00:05:47.578 tick 100 00:05:47.578 test_end 00:05:47.578 ************************************ 00:05:47.578 END TEST event_reactor 00:05:47.578 ************************************ 00:05:47.578 00:05:47.578 real 0m1.211s 00:05:47.578 user 0m1.078s 00:05:47.578 sys 0m0.028s 00:05:47.578 01:46:58 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.578 01:46:58 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:47.837 01:46:58 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:47.837 01:46:58 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:47.837 01:46:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.837 01:46:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.837 ************************************ 00:05:47.837 START TEST event_reactor_perf 00:05:47.837 ************************************ 00:05:47.837 01:46:58 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:47.837 [2024-11-19 01:46:58.242053] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:05:47.837 [2024-11-19 01:46:58.242142] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70272 ] 00:05:47.837 [2024-11-19 01:46:58.385176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.837 [2024-11-19 01:46:58.403408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.215 test_start 00:05:49.215 test_end 00:05:49.215 Performance: 447109 events per second 00:05:49.215 ************************************ 00:05:49.215 END TEST event_reactor_perf 00:05:49.215 ************************************ 00:05:49.215 00:05:49.215 real 0m1.217s 00:05:49.215 user 0m1.073s 00:05:49.215 sys 0m0.038s 00:05:49.215 01:46:59 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.215 01:46:59 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:49.215 01:46:59 event -- event/event.sh@49 -- # uname -s 00:05:49.215 01:46:59 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:49.215 01:46:59 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:49.215 01:46:59 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.215 01:46:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.215 01:46:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.215 ************************************ 00:05:49.215 START TEST event_scheduler 00:05:49.215 ************************************ 00:05:49.215 01:46:59 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:49.215 * Looking for test storage... 
00:05:49.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:49.215 01:46:59 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:49.215 01:46:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:49.215 01:46:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:49.215 01:46:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:49.215 01:46:59 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.215 01:46:59 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.215 01:46:59 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.215 01:46:59 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.215 01:46:59 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.215 01:46:59 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.215 01:46:59 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.215 01:46:59 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.215 01:46:59 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.215 01:46:59 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.215 01:46:59 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.215 01:46:59 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:49.215 01:46:59 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:49.215 01:46:59 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.215 01:46:59 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.215 01:46:59 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:49.215 01:46:59 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:49.215 01:46:59 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.215 01:46:59 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:49.215 01:46:59 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.215 01:46:59 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:49.215 01:46:59 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:49.215 01:46:59 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.215 01:46:59 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:49.215 01:46:59 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.215 01:46:59 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.215 01:46:59 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.215 01:46:59 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:49.215 01:46:59 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.215 01:46:59 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:49.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.215 --rc genhtml_branch_coverage=1 00:05:49.215 --rc genhtml_function_coverage=1 00:05:49.215 --rc genhtml_legend=1 00:05:49.215 --rc geninfo_all_blocks=1 00:05:49.215 --rc geninfo_unexecuted_blocks=1 00:05:49.215 00:05:49.215 ' 00:05:49.215 01:46:59 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:49.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.215 --rc genhtml_branch_coverage=1 00:05:49.215 --rc genhtml_function_coverage=1 00:05:49.215 --rc genhtml_legend=1 00:05:49.215 --rc geninfo_all_blocks=1 00:05:49.215 --rc geninfo_unexecuted_blocks=1 00:05:49.215 00:05:49.215 ' 00:05:49.215 01:46:59 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:49.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.215 --rc genhtml_branch_coverage=1 00:05:49.215 --rc genhtml_function_coverage=1 00:05:49.215 --rc genhtml_legend=1 00:05:49.215 --rc geninfo_all_blocks=1 00:05:49.215 --rc geninfo_unexecuted_blocks=1 00:05:49.215 00:05:49.215 ' 00:05:49.215 01:46:59 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:49.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.215 --rc genhtml_branch_coverage=1 00:05:49.215 --rc genhtml_function_coverage=1 00:05:49.215 --rc genhtml_legend=1 00:05:49.215 --rc geninfo_all_blocks=1 00:05:49.215 --rc geninfo_unexecuted_blocks=1 00:05:49.215 00:05:49.215 ' 00:05:49.215 01:46:59 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:49.215 01:46:59 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70341 00:05:49.215 01:46:59 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:49.215 01:46:59 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:49.215 01:46:59 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70341 00:05:49.215 01:46:59 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 70341 ']' 00:05:49.215 01:46:59 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.215 01:46:59 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.215 01:46:59 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.215 01:46:59 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.215 01:46:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.215 [2024-11-19 01:46:59.740301] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:05:49.215 [2024-11-19 01:46:59.740607] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70341 ] 00:05:49.474 [2024-11-19 01:46:59.887556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:49.474 [2024-11-19 01:46:59.914286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.474 [2024-11-19 01:46:59.914414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.474 [2024-11-19 01:46:59.914552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:49.474 [2024-11-19 01:46:59.914557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.474 01:47:00 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.474 01:47:00 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:49.474 01:47:00 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:49.474 01:47:00 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.474 01:47:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.474 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:49.474 POWER: Cannot set governor of lcore 0 to userspace 00:05:49.474 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:49.474 POWER: Cannot set governor of lcore 0 to performance 00:05:49.474 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:49.474 POWER: Cannot set governor of lcore 0 to userspace 00:05:49.474 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:49.474 POWER: Unable to set Power Management Environment for lcore 0 00:05:49.474 [2024-11-19 01:47:00.013118] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:49.474 [2024-11-19 01:47:00.013129] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:49.474 [2024-11-19 01:47:00.013152] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:49.474 [2024-11-19 01:47:00.013167] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:49.474 [2024-11-19 01:47:00.013174] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:49.474 [2024-11-19 01:47:00.013180]
scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:49.474 01:47:00 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.474 01:47:00 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:49.474 01:47:00 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.474 01:47:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.474 [2024-11-19 01:47:00.048289] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:49.474 [2024-11-19 01:47:00.063791] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:49.474 01:47:00 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.474 01:47:00 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:49.474 01:47:00 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.474 01:47:00 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.474 01:47:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.474 ************************************ 00:05:49.474 START TEST scheduler_create_thread 00:05:49.474 ************************************ 00:05:49.474 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:49.474 01:47:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:49.474 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.474 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.474 2 00:05:49.474 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.474 01:47:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:49.474 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.474 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.733 3 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.733 4 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.733 01:47:00 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.733 5 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.733 6 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.733 7 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.733 8 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.733 9 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.733 10 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.733 01:47:00 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.733 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.300 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.300 01:47:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:50.300 01:47:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:50.300 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.300 01:47:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.240 01:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.240 ************************************ 00:05:51.240 END TEST scheduler_create_thread 00:05:51.240 ************************************ 00:05:51.240 00:05:51.240 real 0m1.751s 00:05:51.240 user 0m0.015s 00:05:51.240 sys 0m0.007s 00:05:51.240 01:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.240 01:47:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.499 01:47:01 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:51.499 01:47:01 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70341 00:05:51.499 01:47:01 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 70341 ']' 00:05:51.499 01:47:01 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 70341 00:05:51.499 01:47:01 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:51.499 01:47:01 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.499 01:47:01 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70341 00:05:51.499 killing process with pid 70341 00:05:51.499 01:47:01 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:51.499 01:47:01 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:51.499 01:47:01 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70341' 00:05:51.499 01:47:01 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 70341 00:05:51.499 01:47:01 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 70341 00:05:51.758 [2024-11-19 01:47:02.305859] 
scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:52.018 ************************************ 00:05:52.018 END TEST event_scheduler 00:05:52.018 ************************************ 00:05:52.018 00:05:52.018 real 0m2.917s 00:05:52.018 user 0m3.823s 00:05:52.018 sys 0m0.268s 00:05:52.018 01:47:02 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.018 01:47:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.018 01:47:02 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:52.018 01:47:02 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:52.018 01:47:02 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.018 01:47:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.018 01:47:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.018 ************************************ 00:05:52.018 START TEST app_repeat 00:05:52.018 ************************************ 00:05:52.018 01:47:02 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:52.018 01:47:02 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.018 01:47:02 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.018 01:47:02 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:52.018 01:47:02 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.018 01:47:02 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:52.018 01:47:02 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:52.018 01:47:02 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:52.018 01:47:02 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70417 00:05:52.018 01:47:02 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.018 01:47:02 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:52.018 Process app_repeat pid: 70417 00:05:52.018 01:47:02 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70417' 00:05:52.018 spdk_app_start Round 0 00:05:52.018 01:47:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:52.018 01:47:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:52.018 01:47:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70417 /var/tmp/spdk-nbd.sock 00:05:52.018 01:47:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70417 ']' 00:05:52.018 01:47:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:52.018 01:47:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:52.018 01:47:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:52.018 01:47:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.018 01:47:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:52.018 [2024-11-19 01:47:02.505433] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
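The app_repeat test starting here runs three rounds of spdk_app_start against /var/tmp/spdk-nbd.sock; each round creates two malloc bdevs, exports them as /dev/nbd0 and /dev/nbd1, and data-verifies them before killing the app. The waitfornbd helper seen in the trace below polls /proc/partitions until the kernel registers the device, then issues one 4 KiB O_DIRECT read to prove the backend answers. A condensed sketch of that pattern, where the 20-attempt budget matches the trace but the sleep between polls is an assumption, and the read goes to /dev/null here where the real helper writes a scratch file and checks its size:

    # Poll until the kernel exposes an nbd device, then prove it can serve one
    # 4 KiB direct read -- the pattern behind "waitfornbd nbd0" in the trace.
    waitfornbd_sketch() {
        local nbd_name=$1 i
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                # assumed poll interval, not from the trace
        done
        (( i <= 20 )) || return 1    # device never showed up in /proc/partitions
        dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
    }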
00:05:52.018 [2024-11-19 01:47:02.505727] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70417 ] 00:05:52.279 [2024-11-19 01:47:02.652148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.279 [2024-11-19 01:47:02.674951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.279 [2024-11-19 01:47:02.674959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.279 [2024-11-19 01:47:02.703615] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.279 01:47:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.279 01:47:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:52.279 01:47:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.538 Malloc0 00:05:52.538 01:47:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.797 Malloc1 00:05:52.797 01:47:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.797 01:47:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.797 01:47:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.797 01:47:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:52.798 01:47:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.798 01:47:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:52.798 01:47:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.798 01:47:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.798 01:47:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.798 01:47:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:52.798 01:47:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.798 01:47:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:52.798 01:47:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:52.798 01:47:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:52.798 01:47:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.798 01:47:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:53.057 /dev/nbd0 00:05:53.057 01:47:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:53.057 01:47:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:53.057 01:47:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:53.057 01:47:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:53.057 01:47:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:53.057 01:47:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:53.057 01:47:03 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:53.057 01:47:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:53.057 01:47:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:53.057 01:47:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:53.057 01:47:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:53.057 1+0 records in 00:05:53.057 1+0 records out 00:05:53.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268075 s, 15.3 MB/s 00:05:53.057 01:47:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:53.057 01:47:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:53.057 01:47:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:53.057 01:47:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:53.057 01:47:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:53.057 01:47:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.057 01:47:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.057 01:47:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:53.317 /dev/nbd1 00:05:53.317 01:47:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:53.317 01:47:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:53.317 01:47:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:53.317 01:47:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:53.317 01:47:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:53.317 01:47:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:53.317 01:47:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:53.317 01:47:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:53.317 01:47:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:53.317 01:47:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:53.317 01:47:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:53.317 1+0 records in 00:05:53.317 1+0 records out 00:05:53.317 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030049 s, 13.6 MB/s 00:05:53.317 01:47:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:53.317 01:47:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:53.317 01:47:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:53.317 01:47:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:53.317 01:47:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:53.317 01:47:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.317 01:47:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.317 01:47:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:05:53.317 01:47:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.318 01:47:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.578 01:47:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:53.578 { 00:05:53.578 "nbd_device": "/dev/nbd0", 00:05:53.578 "bdev_name": "Malloc0" 00:05:53.578 }, 00:05:53.578 { 00:05:53.578 "nbd_device": "/dev/nbd1", 00:05:53.578 "bdev_name": "Malloc1" 00:05:53.578 } 00:05:53.578 ]' 00:05:53.578 01:47:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:53.578 { 00:05:53.578 "nbd_device": "/dev/nbd0", 00:05:53.578 "bdev_name": "Malloc0" 00:05:53.578 }, 00:05:53.578 { 00:05:53.578 "nbd_device": "/dev/nbd1", 00:05:53.578 "bdev_name": "Malloc1" 00:05:53.578 } 00:05:53.578 ]' 00:05:53.578 01:47:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.578 01:47:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:53.578 /dev/nbd1' 00:05:53.578 01:47:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.578 01:47:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:53.578 /dev/nbd1' 00:05:53.578 01:47:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:53.578 01:47:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:53.578 01:47:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:53.578 01:47:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:53.578 01:47:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:53.578 01:47:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.578 01:47:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.578 01:47:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:53.578 01:47:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.578 01:47:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:53.578 01:47:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:53.578 256+0 records in 00:05:53.578 256+0 records out 00:05:53.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00616452 s, 170 MB/s 00:05:53.578 01:47:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.578 01:47:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:53.578 256+0 records in 00:05:53.578 256+0 records out 00:05:53.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247561 s, 42.4 MB/s 00:05:53.578 01:47:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.578 01:47:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:53.837 256+0 records in 00:05:53.837 256+0 records out 00:05:53.837 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0294984 s, 35.5 MB/s 00:05:53.837 01:47:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:53.837 01:47:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.837 01:47:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.837 01:47:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:53.837 01:47:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.837 01:47:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:53.837 01:47:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:53.837 01:47:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.837 01:47:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:53.837 01:47:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.837 01:47:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:53.838 01:47:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:53.838 01:47:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:53.838 01:47:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.838 01:47:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.838 01:47:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:53.838 01:47:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:53.838 01:47:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.838 01:47:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:54.097 01:47:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:54.097 01:47:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:54.097 01:47:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:54.097 01:47:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.097 01:47:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.097 01:47:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:54.097 01:47:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:54.097 01:47:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.097 01:47:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.097 01:47:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:54.358 01:47:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:54.358 01:47:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:54.358 01:47:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:54.358 01:47:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.358 01:47:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.358 01:47:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:54.358 01:47:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:54.358 01:47:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.358 01:47:04 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.358 01:47:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.358 01:47:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.669 01:47:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:54.669 01:47:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:54.669 01:47:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.669 01:47:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:54.669 01:47:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:54.669 01:47:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.669 01:47:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:54.669 01:47:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:54.669 01:47:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:54.669 01:47:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:54.669 01:47:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:54.669 01:47:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:54.669 01:47:05 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:54.958 01:47:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:54.958 [2024-11-19 01:47:05.569316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.217 [2024-11-19 01:47:05.588771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.217 [2024-11-19 01:47:05.588783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.217 [2024-11-19 01:47:05.616441] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:55.217 [2024-11-19 01:47:05.616573] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:55.217 [2024-11-19 01:47:05.616588] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:58.507 01:47:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:58.507 spdk_app_start Round 1 00:05:58.507 01:47:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:58.507 01:47:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70417 /var/tmp/spdk-nbd.sock 00:05:58.507 01:47:08 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70417 ']' 00:05:58.507 01:47:08 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:58.507 01:47:08 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:58.507 01:47:08 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
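Round 0 above ends with the heart of the check: 1 MiB of /dev/urandom is written through each nbd device with O_DIRECT and then compared byte-for-byte against the source file before the devices are stopped. Condensed, the write/verify cycle looks like this; sizes and cmp flags match the trace, while the temp-file path and error message are illustrative (the harness keeps its nbdrandtest file under the repo's test/event directory):

    # One write/verify pass over the exported nbd devices, condensing the
    # nbd_dd_data_verify calls traced above.
    tmp_file=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256             # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct  # push it through nbd
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev" || echo "verify failed on $dev" >&2
    done
    rm -f "$tmp_file"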
00:05:58.507 01:47:08 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.507 01:47:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:58.507 01:47:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.507 01:47:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:58.507 01:47:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.507 Malloc0 00:05:58.507 01:47:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.772 Malloc1 00:05:58.772 01:47:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.772 01:47:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.772 01:47:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.772 01:47:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:58.772 01:47:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.772 01:47:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:58.772 01:47:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.772 01:47:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.772 01:47:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.772 01:47:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:58.772 01:47:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.772 01:47:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:58.772 01:47:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:58.772 01:47:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:58.772 01:47:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.772 01:47:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:59.033 /dev/nbd0 00:05:59.033 01:47:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:59.033 01:47:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:59.033 01:47:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:59.033 01:47:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:59.033 01:47:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:59.033 01:47:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:59.033 01:47:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:59.033 01:47:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:59.033 01:47:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:59.033 01:47:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:59.033 01:47:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:59.033 1+0 records in 00:05:59.033 1+0 records out 
00:05:59.033 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254278 s, 16.1 MB/s 00:05:59.033 01:47:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:59.033 01:47:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:59.033 01:47:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:59.033 01:47:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:59.033 01:47:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:59.033 01:47:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.033 01:47:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.033 01:47:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:59.293 /dev/nbd1 00:05:59.293 01:47:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:59.293 01:47:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:59.293 01:47:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:59.293 01:47:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:59.293 01:47:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:59.293 01:47:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:59.293 01:47:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:59.293 01:47:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:59.293 01:47:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:59.293 01:47:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:59.293 01:47:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:59.293 1+0 records in 00:05:59.293 1+0 records out 00:05:59.293 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293919 s, 13.9 MB/s 00:05:59.293 01:47:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:59.293 01:47:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:59.293 01:47:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:59.293 01:47:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:59.293 01:47:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:59.293 01:47:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.293 01:47:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.293 01:47:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.293 01:47:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.293 01:47:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.552 01:47:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:59.552 { 00:05:59.552 "nbd_device": "/dev/nbd0", 00:05:59.552 "bdev_name": "Malloc0" 00:05:59.552 }, 00:05:59.552 { 00:05:59.552 "nbd_device": "/dev/nbd1", 00:05:59.552 "bdev_name": "Malloc1" 00:05:59.552 } 
00:05:59.552 ]' 00:05:59.552 01:47:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:59.552 { 00:05:59.552 "nbd_device": "/dev/nbd0", 00:05:59.552 "bdev_name": "Malloc0" 00:05:59.552 }, 00:05:59.552 { 00:05:59.552 "nbd_device": "/dev/nbd1", 00:05:59.552 "bdev_name": "Malloc1" 00:05:59.552 } 00:05:59.552 ]' 00:05:59.552 01:47:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.811 01:47:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:59.811 /dev/nbd1' 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:59.812 /dev/nbd1' 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:59.812 256+0 records in 00:05:59.812 256+0 records out 00:05:59.812 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00743186 s, 141 MB/s 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:59.812 256+0 records in 00:05:59.812 256+0 records out 00:05:59.812 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245848 s, 42.7 MB/s 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:59.812 256+0 records in 00:05:59.812 256+0 records out 00:05:59.812 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025902 s, 40.5 MB/s 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.812 01:47:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:00.071 01:47:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:00.071 01:47:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:00.071 01:47:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:00.071 01:47:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.071 01:47:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.071 01:47:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:00.071 01:47:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.071 01:47:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.071 01:47:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.071 01:47:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:00.330 01:47:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:00.330 01:47:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:00.330 01:47:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:00.330 01:47:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.330 01:47:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.330 01:47:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:00.330 01:47:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.330 01:47:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.330 01:47:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.330 01:47:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.330 01:47:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.589 01:47:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:00.589 01:47:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:00.589 01:47:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:00.589 01:47:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:00.589 01:47:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:00.589 01:47:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.589 01:47:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:00.589 01:47:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:00.589 01:47:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:00.589 01:47:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:00.589 01:47:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:00.589 01:47:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:00.589 01:47:11 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:00.848 01:47:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:01.108 [2024-11-19 01:47:11.497656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.108 [2024-11-19 01:47:11.515756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.108 [2024-11-19 01:47:11.515761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.108 [2024-11-19 01:47:11.544188] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.108 [2024-11-19 01:47:11.544295] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:01.108 [2024-11-19 01:47:11.544309] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:04.394 01:47:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:04.394 spdk_app_start Round 2 00:06:04.394 01:47:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:04.394 01:47:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70417 /var/tmp/spdk-nbd.sock 00:06:04.394 01:47:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70417 ']' 00:06:04.394 01:47:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.394 01:47:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:04.394 01:47:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
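Between rounds the harness proves teardown worked by re-querying the app: nbd_get_disks returns a JSON array, jq extracts each .nbd_device path, and grep -c counts the matches, with the bare 'true' visible at nbd_common.sh@65 absorbing grep's non-zero exit when the count is zero. The same pipeline as a sketch, with the rpc.py and socket paths taken from the trace:

    # Count the nbd devices the app still exports, following the
    # nbd_get_disks | jq | grep -c pipeline from the trace; "|| true"
    # plays the role of the harness's bare "true" guard.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' \
            | grep -c /dev/nbd || true)
    [ "$count" -ne 0 ] && echo "teardown left $count device(s) attached" >&2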
00:06:04.394 01:47:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.394 01:47:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.394 01:47:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.394 01:47:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:04.394 01:47:14 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.394 Malloc0 00:06:04.394 01:47:14 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.653 Malloc1 00:06:04.653 01:47:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.653 01:47:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.653 01:47:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.653 01:47:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:04.653 01:47:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.653 01:47:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:04.653 01:47:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.653 01:47:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.653 01:47:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.653 01:47:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:04.653 01:47:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.653 01:47:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:04.653 01:47:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:04.653 01:47:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:04.653 01:47:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.653 01:47:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:04.912 /dev/nbd0 00:06:04.912 01:47:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:04.912 01:47:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:04.912 01:47:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:04.912 01:47:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:04.912 01:47:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:04.912 01:47:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:04.912 01:47:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:04.912 01:47:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:04.912 01:47:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:04.912 01:47:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:04.912 01:47:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.912 1+0 records in 00:06:04.912 1+0 records out 
00:06:04.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301942 s, 13.6 MB/s 00:06:04.912 01:47:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.912 01:47:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:04.912 01:47:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:04.912 01:47:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:04.912 01:47:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:04.912 01:47:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.912 01:47:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.912 01:47:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:05.171 /dev/nbd1 00:06:05.171 01:47:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:05.171 01:47:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:05.171 01:47:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:05.171 01:47:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:05.171 01:47:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:05.171 01:47:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:05.171 01:47:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:05.171 01:47:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:05.171 01:47:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:05.171 01:47:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:05.171 01:47:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.171 1+0 records in 00:06:05.171 1+0 records out 00:06:05.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341668 s, 12.0 MB/s 00:06:05.171 01:47:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.171 01:47:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:05.171 01:47:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.171 01:47:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:05.171 01:47:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:05.171 01:47:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.171 01:47:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.171 01:47:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.171 01:47:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.171 01:47:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:05.740 { 00:06:05.740 "nbd_device": "/dev/nbd0", 00:06:05.740 "bdev_name": "Malloc0" 00:06:05.740 }, 00:06:05.740 { 00:06:05.740 "nbd_device": "/dev/nbd1", 00:06:05.740 "bdev_name": "Malloc1" 00:06:05.740 } 
00:06:05.740 ]' 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:05.740 { 00:06:05.740 "nbd_device": "/dev/nbd0", 00:06:05.740 "bdev_name": "Malloc0" 00:06:05.740 }, 00:06:05.740 { 00:06:05.740 "nbd_device": "/dev/nbd1", 00:06:05.740 "bdev_name": "Malloc1" 00:06:05.740 } 00:06:05.740 ]' 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:05.740 /dev/nbd1' 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:05.740 /dev/nbd1' 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:05.740 256+0 records in 00:06:05.740 256+0 records out 00:06:05.740 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00835315 s, 126 MB/s 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:05.740 256+0 records in 00:06:05.740 256+0 records out 00:06:05.740 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251695 s, 41.7 MB/s 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:05.740 256+0 records in 00:06:05.740 256+0 records out 00:06:05.740 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235885 s, 44.5 MB/s 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:05.740 01:47:16 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.740 01:47:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:06.000 01:47:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:06.000 01:47:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:06.000 01:47:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:06.000 01:47:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.000 01:47:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.000 01:47:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:06.000 01:47:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.000 01:47:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.000 01:47:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.000 01:47:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.258 01:47:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.258 01:47:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:06.258 01:47:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.258 01:47:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.259 01:47:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.259 01:47:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.259 01:47:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.259 01:47:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.259 01:47:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.259 01:47:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.259 01:47:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.826 01:47:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.826 01:47:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.826 01:47:17 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:06.826 01:47:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.826 01:47:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.826 01:47:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.826 01:47:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:06.826 01:47:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.826 01:47:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.826 01:47:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.826 01:47:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.826 01:47:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.826 01:47:17 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:07.085 01:47:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:07.085 [2024-11-19 01:47:17.590590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.085 [2024-11-19 01:47:17.612025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.085 [2024-11-19 01:47:17.612037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.085 [2024-11-19 01:47:17.643985] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:07.085 [2024-11-19 01:47:17.644093] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:07.085 [2024-11-19 01:47:17.644112] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:10.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:10.374 01:47:20 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70417 /var/tmp/spdk-nbd.sock 00:06:10.374 01:47:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70417 ']' 00:06:10.374 01:47:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.374 01:47:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.374 01:47:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
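The nbd exchange that closes above is the core of app_repeat's device check: two malloc bdevs are exported as kernel nbd devices over the target's JSON-RPC Unix socket, written with a random pattern, compared back, and detached. A minimal standalone replay of that flow, using only the rpc.py calls and dd/cmp invocations that appear in this trace (socket path and sizes as logged; the temp-file path here is a stand-in for the repo-internal one, and a running spdk_tgt serving /var/tmp/spdk-nbd.sock is assumed):

  # two 64 MB malloc bdevs with a 4096-byte block size -> Malloc0, Malloc1
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096

  # export each bdev as a kernel block device
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1

  # write a 1 MiB random pattern through each device, then verify it back
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  for d in /dev/nbd0 /dev/nbd1; do
      dd if=/tmp/nbdrandtest of="$d" bs=4096 count=256 oflag=direct
      cmp -b -n 1M /tmp/nbdrandtest "$d"
  done

  # detach both devices; nbd_get_disks should then print an empty list
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device'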
00:06:10.374 01:47:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.374 01:47:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.374 01:47:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.374 01:47:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:10.374 01:47:20 event.app_repeat -- event/event.sh@39 -- # killprocess 70417 00:06:10.374 01:47:20 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 70417 ']' 00:06:10.374 01:47:20 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 70417 00:06:10.374 01:47:20 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:10.374 01:47:20 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.374 01:47:20 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70417 00:06:10.374 killing process with pid 70417 00:06:10.374 01:47:20 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.374 01:47:20 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.374 01:47:20 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70417' 00:06:10.374 01:47:20 event.app_repeat -- common/autotest_common.sh@973 -- # kill 70417 00:06:10.374 01:47:20 event.app_repeat -- common/autotest_common.sh@978 -- # wait 70417 00:06:10.374 spdk_app_start is called in Round 0. 00:06:10.374 Shutdown signal received, stop current app iteration 00:06:10.374 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 reinitialization... 00:06:10.374 spdk_app_start is called in Round 1. 00:06:10.374 Shutdown signal received, stop current app iteration 00:06:10.374 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 reinitialization... 00:06:10.374 spdk_app_start is called in Round 2. 00:06:10.374 Shutdown signal received, stop current app iteration 00:06:10.374 Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 reinitialization... 00:06:10.374 spdk_app_start is called in Round 3. 00:06:10.374 Shutdown signal received, stop current app iteration 00:06:10.374 01:47:20 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:10.374 01:47:20 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:10.374 00:06:10.374 real 0m18.465s 00:06:10.374 user 0m42.609s 00:06:10.374 sys 0m2.439s 00:06:10.374 01:47:20 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.374 01:47:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.374 ************************************ 00:06:10.374 END TEST app_repeat 00:06:10.374 ************************************ 00:06:10.374 01:47:20 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:10.374 01:47:20 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:10.374 01:47:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.374 01:47:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.374 01:47:20 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.633 ************************************ 00:06:10.633 START TEST cpu_locks 00:06:10.633 ************************************ 00:06:10.633 01:47:20 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:10.633 * Looking for test storage... 
00:06:10.633 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:10.633 01:47:21 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:10.633 01:47:21 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:10.633 01:47:21 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:10.633 01:47:21 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:10.633 01:47:21 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.633 01:47:21 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.634 01:47:21 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.634 01:47:21 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.634 01:47:21 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.634 01:47:21 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.634 01:47:21 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.634 01:47:21 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.634 01:47:21 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.634 01:47:21 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.634 01:47:21 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.634 01:47:21 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:10.634 01:47:21 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:10.634 01:47:21 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.634 01:47:21 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.634 01:47:21 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:10.634 01:47:21 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:10.634 01:47:21 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.634 01:47:21 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:10.634 01:47:21 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.634 01:47:21 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:10.634 01:47:21 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:10.634 01:47:21 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.634 01:47:21 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:10.634 01:47:21 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.634 01:47:21 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.634 01:47:21 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.634 01:47:21 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:10.634 01:47:21 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.634 01:47:21 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:10.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.634 --rc genhtml_branch_coverage=1 00:06:10.634 --rc genhtml_function_coverage=1 00:06:10.634 --rc genhtml_legend=1 00:06:10.634 --rc geninfo_all_blocks=1 00:06:10.634 --rc geninfo_unexecuted_blocks=1 00:06:10.634 00:06:10.634 ' 00:06:10.634 01:47:21 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:10.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.634 --rc genhtml_branch_coverage=1 00:06:10.634 --rc genhtml_function_coverage=1 
00:06:10.634 --rc genhtml_legend=1 00:06:10.634 --rc geninfo_all_blocks=1 00:06:10.634 --rc geninfo_unexecuted_blocks=1 00:06:10.634 00:06:10.634 ' 00:06:10.634 01:47:21 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:10.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.634 --rc genhtml_branch_coverage=1 00:06:10.634 --rc genhtml_function_coverage=1 00:06:10.634 --rc genhtml_legend=1 00:06:10.634 --rc geninfo_all_blocks=1 00:06:10.634 --rc geninfo_unexecuted_blocks=1 00:06:10.634 00:06:10.634 ' 00:06:10.634 01:47:21 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:10.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.634 --rc genhtml_branch_coverage=1 00:06:10.634 --rc genhtml_function_coverage=1 00:06:10.634 --rc genhtml_legend=1 00:06:10.634 --rc geninfo_all_blocks=1 00:06:10.634 --rc geninfo_unexecuted_blocks=1 00:06:10.634 00:06:10.634 ' 00:06:10.634 01:47:21 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:10.634 01:47:21 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:10.634 01:47:21 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:10.634 01:47:21 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:10.634 01:47:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.634 01:47:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.634 01:47:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.634 ************************************ 00:06:10.634 START TEST default_locks 00:06:10.634 ************************************ 00:06:10.634 01:47:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:10.634 01:47:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70852 00:06:10.634 01:47:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70852 00:06:10.634 01:47:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.634 01:47:21 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70852 ']' 00:06:10.634 01:47:21 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.634 01:47:21 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.634 01:47:21 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.634 01:47:21 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.634 01:47:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.893 [2024-11-19 01:47:21.250717] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:10.893 [2024-11-19 01:47:21.250808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70852 ] 00:06:10.893 [2024-11-19 01:47:21.394880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.893 [2024-11-19 01:47:21.414584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.893 [2024-11-19 01:47:21.448326] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.152 01:47:21 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.152 01:47:21 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:11.152 01:47:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70852 00:06:11.152 01:47:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70852 00:06:11.153 01:47:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.412 01:47:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70852 00:06:11.412 01:47:21 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 70852 ']' 00:06:11.412 01:47:21 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 70852 00:06:11.412 01:47:21 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:11.412 01:47:21 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.412 01:47:21 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70852 00:06:11.412 01:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.412 killing process with pid 70852 00:06:11.412 01:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.412 01:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70852' 00:06:11.412 01:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 70852 00:06:11.412 01:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 70852 00:06:11.672 01:47:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70852 00:06:11.672 01:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:11.672 01:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70852 00:06:11.672 01:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:11.672 01:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.672 01:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:11.672 01:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.672 01:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 70852 00:06:11.672 01:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70852 ']' 00:06:11.672 01:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.672 
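The lslocks -p 70852 / grep -q spdk_cpu_lock pair in the xtrace above (cpu_locks.sh line 22) is the assertion primitive the whole cpu_locks suite leans on: the kernel's advisory-lock table, not SPDK's own state, decides whether a target is considered to hold its core locks. Reconstructed from that line-22 trace as the helper the script invokes under the name locks_exist (a sketch of the visible commands, not a verbatim copy of the script):

  locks_exist() {
      # true iff the given pid holds at least one spdk_cpu_lock file lock
      lslocks -p "$1" | grep -q spdk_cpu_lock
  }

  locks_exist 70852 && echo "pid 70852 holds its core lock"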
01:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.672 01:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.672 01:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.672 01:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.672 ERROR: process (pid: 70852) is no longer running 00:06:11.672 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70852) - No such process 00:06:11.672 01:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.672 01:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:11.672 01:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:11.672 01:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:11.672 01:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:11.672 01:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:11.672 01:47:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:11.672 01:47:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:11.672 01:47:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:11.672 01:47:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:11.672 00:06:11.672 real 0m1.050s 00:06:11.672 user 0m1.070s 00:06:11.672 sys 0m0.412s 00:06:11.672 01:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.672 01:47:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.672 ************************************ 00:06:11.672 END TEST default_locks 00:06:11.672 ************************************ 00:06:11.672 01:47:22 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:11.672 01:47:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.672 01:47:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.672 01:47:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.932 ************************************ 00:06:11.932 START TEST default_locks_via_rpc 00:06:11.932 ************************************ 00:06:11.932 01:47:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:11.932 01:47:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70897 00:06:11.932 01:47:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.932 01:47:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70897 00:06:11.932 01:47:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 70897 ']' 00:06:11.932 01:47:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.932 01:47:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:06:11.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.932 01:47:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.932 01:47:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.932 01:47:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.932 [2024-11-19 01:47:22.356386] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:11.932 [2024-11-19 01:47:22.356487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70897 ] 00:06:11.932 [2024-11-19 01:47:22.503701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.932 [2024-11-19 01:47:22.522822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.191 [2024-11-19 01:47:22.558733] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.191 01:47:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.191 01:47:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:12.191 01:47:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:12.191 01:47:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.191 01:47:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.191 01:47:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.191 01:47:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:12.191 01:47:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:12.191 01:47:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:12.191 01:47:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:12.191 01:47:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:12.191 01:47:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.191 01:47:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.191 01:47:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.191 01:47:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70897 00:06:12.191 01:47:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70897 00:06:12.191 01:47:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:12.760 01:47:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70897 00:06:12.760 01:47:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 70897 ']' 00:06:12.760 01:47:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 70897 00:06:12.760 01:47:23 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:12.760 01:47:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.760 01:47:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70897 00:06:12.760 01:47:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.760 killing process with pid 70897 00:06:12.760 01:47:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.760 01:47:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70897' 00:06:12.760 01:47:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 70897 00:06:12.760 01:47:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 70897 00:06:12.760 00:06:12.760 real 0m1.055s 00:06:12.760 user 0m1.149s 00:06:12.760 sys 0m0.395s 00:06:12.760 01:47:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.760 01:47:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.760 ************************************ 00:06:12.760 END TEST default_locks_via_rpc 00:06:12.760 ************************************ 00:06:13.019 01:47:23 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:13.019 01:47:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.019 01:47:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.019 01:47:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.019 ************************************ 00:06:13.019 START TEST non_locking_app_on_locked_coremask 00:06:13.019 ************************************ 00:06:13.019 01:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:13.019 01:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70936 00:06:13.019 01:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70936 /var/tmp/spdk.sock 00:06:13.019 01:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.019 01:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70936 ']' 00:06:13.019 01:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.019 01:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.019 01:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
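default_locks_via_rpc, which wraps up above, covers the same ground as default_locks but drops and retakes the locks at runtime instead of observing them disappear across process death. The toggle is the rpc_cmd pair visible in the trace; issued directly it is just two rpc.py calls (method names exactly as invoked above; the default /var/tmp/spdk.sock socket is assumed):

  # release the per-core lock files while the target keeps running;
  # lslocks -p <pid> stops matching spdk_cpu_lock afterwards
  scripts/rpc.py framework_disable_cpumask_locks

  # take the locks again; the lslocks check passes once more
  scripts/rpc.py framework_enable_cpumask_locks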
00:06:13.019 01:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.019 01:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.019 [2024-11-19 01:47:23.452236] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:13.019 [2024-11-19 01:47:23.452321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70936 ] 00:06:13.019 [2024-11-19 01:47:23.591930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.019 [2024-11-19 01:47:23.611067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.278 [2024-11-19 01:47:23.646323] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.278 01:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.278 01:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:13.278 01:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70944 00:06:13.278 01:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:13.278 01:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70944 /var/tmp/spdk2.sock 00:06:13.278 01:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70944 ']' 00:06:13.278 01:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.278 01:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:13.278 01:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.278 01:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.278 01:47:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.278 [2024-11-19 01:47:23.814469] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:13.278 [2024-11-19 01:47:23.814589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70944 ] 00:06:13.538 [2024-11-19 01:47:23.966069] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
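The "CPU core locks deactivated" notice directly above is the pivot of non_locking_app_on_locked_coremask: pid 70936 holds the lock for core 0, and a second target can still come up on the very same core only because it is started with --disable-cpumask-locks. Stripped to the two launches from this trace (flags verbatim from the log; backgrounding and pid bookkeeping elided from the real script):

  # first target takes core 0 and creates its spdk_cpu_lock entry
  build/bin/spdk_tgt -m 0x1 &

  # second target shares core 0 but never attempts the lock,
  # so both processes coexist; only the first shows up in lslocks
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &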
00:06:13.538 [2024-11-19 01:47:23.966119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.538 [2024-11-19 01:47:24.005251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.538 [2024-11-19 01:47:24.076321] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.797 01:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.798 01:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:13.798 01:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70936 00:06:13.798 01:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70936 00:06:13.798 01:47:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.735 01:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70936 00:06:14.735 01:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70936 ']' 00:06:14.735 01:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70936 00:06:14.735 01:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:14.735 01:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.735 01:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70936 00:06:14.735 01:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.735 killing process with pid 70936 00:06:14.735 01:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.735 01:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70936' 00:06:14.735 01:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70936 00:06:14.735 01:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70936 00:06:14.994 01:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70944 00:06:14.994 01:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70944 ']' 00:06:14.994 01:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70944 00:06:14.994 01:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:14.994 01:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.994 01:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70944 00:06:15.253 01:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.253 killing process with pid 70944 00:06:15.253 01:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.253 01:47:25 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70944' 00:06:15.253 01:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70944 00:06:15.253 01:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70944 00:06:15.253 00:06:15.253 real 0m2.447s 00:06:15.253 user 0m2.741s 00:06:15.253 sys 0m0.842s 00:06:15.253 01:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.253 ************************************ 00:06:15.253 END TEST non_locking_app_on_locked_coremask 00:06:15.253 ************************************ 00:06:15.253 01:47:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.513 01:47:25 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:15.513 01:47:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.513 01:47:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.513 01:47:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.513 ************************************ 00:06:15.513 START TEST locking_app_on_unlocked_coremask 00:06:15.513 ************************************ 00:06:15.513 01:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:15.513 01:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=70998 00:06:15.513 01:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 70998 /var/tmp/spdk.sock 00:06:15.513 01:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:15.513 01:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70998 ']' 00:06:15.513 01:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.513 01:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.513 01:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.513 01:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.513 01:47:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.513 [2024-11-19 01:47:25.962375] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:15.513 [2024-11-19 01:47:25.962472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70998 ] 00:06:15.514 [2024-11-19 01:47:26.108110] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:15.514 [2024-11-19 01:47:26.108144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.514 [2024-11-19 01:47:26.128578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.774 [2024-11-19 01:47:26.166085] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.774 01:47:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.774 01:47:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:15.774 01:47:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71001 00:06:15.774 01:47:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71001 /var/tmp/spdk2.sock 00:06:15.774 01:47:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:15.774 01:47:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71001 ']' 00:06:15.774 01:47:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.774 01:47:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.774 01:47:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.774 01:47:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.774 01:47:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.774 [2024-11-19 01:47:26.366715] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:15.774 [2024-11-19 01:47:26.366825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71001 ] 00:06:16.033 [2024-11-19 01:47:26.532015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.033 [2024-11-19 01:47:26.575111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.292 [2024-11-19 01:47:26.654706] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.862 01:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.862 01:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:16.862 01:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71001 00:06:16.862 01:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71001 00:06:16.862 01:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.429 01:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 70998 00:06:17.429 01:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70998 ']' 00:06:17.430 01:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 70998 00:06:17.430 01:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:17.430 01:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.430 01:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70998 00:06:17.430 01:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.430 killing process with pid 70998 00:06:17.430 01:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.430 01:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70998' 00:06:17.430 01:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 70998 00:06:17.430 01:47:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 70998 00:06:17.998 01:47:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71001 00:06:17.998 01:47:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71001 ']' 00:06:17.998 01:47:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 71001 00:06:17.998 01:47:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:17.998 01:47:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.998 01:47:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71001 00:06:17.998 01:47:28 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.998 01:47:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.998 killing process with pid 71001 00:06:17.998 01:47:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71001' 00:06:17.998 01:47:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 71001 00:06:17.998 01:47:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 71001 00:06:18.256 00:06:18.256 real 0m2.738s 00:06:18.256 user 0m3.224s 00:06:18.256 sys 0m0.766s 00:06:18.256 01:47:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.256 ************************************ 00:06:18.256 END TEST locking_app_on_unlocked_coremask 00:06:18.256 ************************************ 00:06:18.256 01:47:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.256 01:47:28 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:18.256 01:47:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.256 01:47:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.256 01:47:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.256 ************************************ 00:06:18.256 START TEST locking_app_on_locked_coremask 00:06:18.256 ************************************ 00:06:18.256 01:47:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:18.256 01:47:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71068 00:06:18.256 01:47:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71068 /var/tmp/spdk.sock 00:06:18.256 01:47:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71068 ']' 00:06:18.256 01:47:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.256 01:47:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.256 01:47:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.256 01:47:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.256 01:47:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.256 01:47:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.256 [2024-11-19 01:47:28.746358] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:18.256 [2024-11-19 01:47:28.746475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71068 ] 00:06:18.514 [2024-11-19 01:47:28.885482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.514 [2024-11-19 01:47:28.904017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.514 [2024-11-19 01:47:28.938069] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.449 01:47:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.449 01:47:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:19.449 01:47:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71084 00:06:19.449 01:47:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:19.449 01:47:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71084 /var/tmp/spdk2.sock 00:06:19.449 01:47:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:19.449 01:47:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71084 /var/tmp/spdk2.sock 00:06:19.449 01:47:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:19.449 01:47:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.449 01:47:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:19.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.449 01:47:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.449 01:47:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71084 /var/tmp/spdk2.sock 00:06:19.449 01:47:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71084 ']' 00:06:19.450 01:47:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.450 01:47:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.450 01:47:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.450 01:47:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.450 01:47:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.450 [2024-11-19 01:47:29.748983] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:19.450 [2024-11-19 01:47:29.749238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71084 ] 00:06:19.450 [2024-11-19 01:47:29.908571] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71068 has claimed it. 00:06:19.450 [2024-11-19 01:47:29.908663] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:20.014 ERROR: process (pid: 71084) is no longer running 00:06:20.014 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71084) - No such process 00:06:20.014 01:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.014 01:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:20.014 01:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:20.014 01:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:20.015 01:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:20.015 01:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:20.015 01:47:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71068 00:06:20.015 01:47:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.015 01:47:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71068 00:06:20.274 01:47:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71068 00:06:20.274 01:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71068 ']' 00:06:20.274 01:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71068 00:06:20.274 01:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:20.274 01:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.274 01:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71068 00:06:20.274 killing process with pid 71068 00:06:20.274 01:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.274 01:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.274 01:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71068' 00:06:20.274 01:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71068 00:06:20.274 01:47:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71068 00:06:20.589 00:06:20.589 real 0m2.354s 00:06:20.589 user 0m2.880s 00:06:20.589 sys 0m0.450s 00:06:20.589 01:47:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.589 01:47:31 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:20.589 ************************************ 00:06:20.589 END TEST locking_app_on_locked_coremask 00:06:20.589 ************************************ 00:06:20.589 01:47:31 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:20.589 01:47:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.589 01:47:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.589 01:47:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.589 ************************************ 00:06:20.589 START TEST locking_overlapped_coremask 00:06:20.589 ************************************ 00:06:20.589 01:47:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:20.589 01:47:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71124 00:06:20.589 01:47:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71124 /var/tmp/spdk.sock 00:06:20.589 01:47:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:20.589 01:47:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71124 ']' 00:06:20.589 01:47:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.589 01:47:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.589 01:47:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.589 01:47:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.589 01:47:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.589 [2024-11-19 01:47:31.146780] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
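A note on the locks_exist check above: lslocks -p <pid> | grep -q spdk_cpu_lock passes because the target holds one POSIX file lock per claimed core. Below is a minimal C sketch of that claim pattern, assuming the /var/tmp/spdk_cpu_lock_NNN naming that the check_remaining_locks expansion later in this run implies; it illustrates the mechanism, not SPDK's actual app.c code, and writing the pid into the file is this sketch's guess at how the "probably process N has claimed it" hint gets its number.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/file.h>
    #include <unistd.h>

    /* Claim one CPU core by taking an exclusive, non-blocking flock on
     * its lock file. Returns the held fd, or -1 if another process
     * (e.g. pid 71068 above) already owns the core. */
    static int claim_core(unsigned int core)
    {
        char path[64];
        snprintf(path, sizeof(path), "/var/tmp/spdk_cpu_lock_%03u", core);
        int fd = open(path, O_RDWR | O_CREAT, 0600);
        if (fd < 0) {
            return -1;
        }
        if (flock(fd, LOCK_EX | LOCK_NB) != 0) {
            close(fd);  /* held elsewhere: the "Cannot create lock" case */
            return -1;
        }
        char pid[16];
        int len = snprintf(pid, sizeof(pid), "%d", getpid());
        if (write(fd, pid, (size_t)len) < 0) {
            /* best effort: the pid is only a diagnostic hint */
        }
        return fd;  /* keep the fd open; the lock lives as long as it does */
    }

Because the lock dies with the fd, killing the first target releases its cores with no explicit cleanup step, which is why killprocess alone is enough between tests.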
00:06:20.589 [2024-11-19 01:47:31.146876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71124 ] 00:06:20.872 [2024-11-19 01:47:31.291353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.872 [2024-11-19 01:47:31.315372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.872 [2024-11-19 01:47:31.315442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.872 [2024-11-19 01:47:31.315448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.872 [2024-11-19 01:47:31.353997] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:20.872 01:47:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.872 01:47:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:20.872 01:47:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71129 00:06:20.872 01:47:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71129 /var/tmp/spdk2.sock 00:06:20.872 01:47:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:20.872 01:47:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:20.872 01:47:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71129 /var/tmp/spdk2.sock 00:06:20.872 01:47:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:20.872 01:47:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.872 01:47:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:20.872 01:47:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.872 01:47:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71129 /var/tmp/spdk2.sock 00:06:20.872 01:47:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71129 ']' 00:06:20.872 01:47:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.872 01:47:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.872 01:47:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.872 01:47:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.872 01:47:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.131 [2024-11-19 01:47:31.540156] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
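Why the failure below lands on core 2: the running target claimed -m 0x7 (cores 0-2), and the second instance started next asks for -m 0x1c (cores 2-4). The contended set is just the bitwise AND of the two masks; a quick check using only the masks printed in this log:

    #include <stdio.h>

    int main(void)
    {
        unsigned int first = 0x7;    /* cores 0,1,2 - pid 71124 */
        unsigned int second = 0x1c;  /* cores 2,3,4 - second instance */
        unsigned int overlap = first & second;  /* 0x4 */
        for (unsigned int core = 0; overlap != 0; core++, overlap >>= 1) {
            if (overlap & 1) {
                printf("contended core: %u\n", core);
            }
        }
        return 0;
    }

0x7 & 0x1c = 0x4, i.e. core 2, which matches the claim_cpu_cores error that follows.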
00:06:21.132 [2024-11-19 01:47:31.540264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71129 ] 00:06:21.132 [2024-11-19 01:47:31.701844] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71124 has claimed it. 00:06:21.132 [2024-11-19 01:47:31.701913] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:21.699 ERROR: process (pid: 71129) is no longer running 00:06:21.699 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71129) - No such process 00:06:21.699 01:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.699 01:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:21.699 01:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:21.699 01:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:21.699 01:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:21.699 01:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:21.699 01:47:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:21.699 01:47:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:21.699 01:47:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:21.700 01:47:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:21.700 01:47:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71124 00:06:21.700 01:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 71124 ']' 00:06:21.700 01:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 71124 00:06:21.700 01:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:21.700 01:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.700 01:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71124 00:06:21.700 01:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.700 01:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.700 killing process with pid 71124 00:06:21.700 01:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71124' 00:06:21.700 01:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 71124 00:06:21.700 01:47:32 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 71124 00:06:21.959 00:06:21.959 real 0m1.394s 00:06:21.959 user 0m3.844s 00:06:21.959 sys 0m0.303s 00:06:21.959 ************************************ 00:06:21.959 END TEST locking_overlapped_coremask 00:06:21.959 ************************************ 00:06:21.959 01:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.959 01:47:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.959 01:47:32 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:21.959 01:47:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.959 01:47:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.959 01:47:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.959 ************************************ 00:06:21.959 START TEST locking_overlapped_coremask_via_rpc 00:06:21.959 ************************************ 00:06:21.959 01:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:21.959 01:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71175 00:06:21.959 01:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71175 /var/tmp/spdk.sock 00:06:21.959 01:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:21.959 01:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71175 ']' 00:06:21.959 01:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.959 01:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.959 01:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.959 01:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.959 01:47:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.218 [2024-11-19 01:47:32.582160] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:22.218 [2024-11-19 01:47:32.582410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71175 ] 00:06:22.218 [2024-11-19 01:47:32.722658] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
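A side note on the timing above: real 0m1.394s against user 0m3.844s is expected here rather than suspicious. SPDK reactors busy-poll, so each of the three claimed cores burns close to a full CPU for the life of the test, and user time approaches ncores x wall time. A rough sanity check on those two figures, with the numbers copied from the output above:

    #include <stdio.h>

    int main(void)
    {
        double real_s = 1.394, user_s = 3.844;
        int reactors = 3;  /* -m 0x7 */
        double util = user_s / (real_s * reactors);
        printf("aggregate CPU utilization: %.0f%%\n", util * 100.0);  /* ~92% */
        return 0;
    }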
00:06:22.218 [2024-11-19 01:47:32.722693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.218 [2024-11-19 01:47:32.743605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.218 [2024-11-19 01:47:32.743720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.218 [2024-11-19 01:47:32.743942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.218 [2024-11-19 01:47:32.782426] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.155 01:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.155 01:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:23.155 01:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71193 00:06:23.155 01:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:23.155 01:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71193 /var/tmp/spdk2.sock 00:06:23.155 01:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71193 ']' 00:06:23.155 01:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.155 01:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.155 01:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:23.155 01:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.155 01:47:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.155 [2024-11-19 01:47:33.577314] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:23.155 [2024-11-19 01:47:33.577988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71193 ] 00:06:23.155 [2024-11-19 01:47:33.733163] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:23.155 [2024-11-19 01:47:33.733214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.413 [2024-11-19 01:47:33.770689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.413 [2024-11-19 01:47:33.774815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:23.413 [2024-11-19 01:47:33.774816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.413 [2024-11-19 01:47:33.843988] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.672 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.672 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:23.672 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:23.672 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.672 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.672 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.672 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:23.672 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:23.673 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:23.673 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:23.673 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:23.673 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:23.673 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:23.673 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:23.673 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.673 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.673 [2024-11-19 01:47:34.054661] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71175 has claimed it. 
00:06:23.673 request: 00:06:23.673 { 00:06:23.673 "method": "framework_enable_cpumask_locks", 00:06:23.673 "req_id": 1 00:06:23.673 } 00:06:23.673 Got JSON-RPC error response 00:06:23.673 response: 00:06:23.673 { 00:06:23.673 "code": -32603, 00:06:23.673 "message": "Failed to claim CPU core: 2" 00:06:23.673 } 00:06:23.673 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:23.673 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:23.673 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:23.673 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:23.673 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:23.673 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71175 /var/tmp/spdk.sock 00:06:23.673 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71175 ']' 00:06:23.673 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.673 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.673 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.673 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.673 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.931 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.931 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:23.931 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71193 /var/tmp/spdk2.sock 00:06:23.931 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71193 ']' 00:06:23.931 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.932 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.932 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
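The request/response pair above is ordinary JSON-RPC over the target's Unix socket, here /var/tmp/spdk2.sock. Below is a minimal C client that issues the same framework_enable_cpumask_locks call and prints the reply; it assumes the server will answer a single JSON-RPC 2.0 request object on the stream (the envelope scripts/rpc.py sends) and makes no attempt at real JSON framing:

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
        const char *req =
            "{\"jsonrpc\":\"2.0\",\"id\":1,"
            "\"method\":\"framework_enable_cpumask_locks\"}";
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, "/var/tmp/spdk2.sock", sizeof(addr.sun_path) - 1);
        if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            perror("connect");
            return 1;
        }
        if (write(fd, req, strlen(req)) < 0) {
            perror("write");
            return 1;
        }
        char buf[4096];
        ssize_t n = read(fd, buf, sizeof(buf) - 1);  /* single read: a sketch,
                                                        not a full JSON parser */
        if (n > 0) {
            buf[n] = '\0';
            printf("%s\n", buf);
        }
        close(fd);
        return 0;
    }

While pid 71175 still holds core 2, the reply should carry code -32603, "Failed to claim CPU core: 2", exactly as logged above.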
00:06:23.932 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.932 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.190 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.190 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:24.190 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:24.190 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:24.190 ************************************ 00:06:24.190 END TEST locking_overlapped_coremask_via_rpc 00:06:24.190 ************************************ 00:06:24.190 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:24.190 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:24.190 00:06:24.190 real 0m2.059s 00:06:24.190 user 0m1.159s 00:06:24.190 sys 0m0.156s 00:06:24.190 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.190 01:47:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.190 01:47:34 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:24.190 01:47:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71175 ]] 00:06:24.190 01:47:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71175 00:06:24.190 01:47:34 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71175 ']' 00:06:24.190 01:47:34 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71175 00:06:24.190 01:47:34 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:24.190 01:47:34 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.190 01:47:34 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71175 00:06:24.190 killing process with pid 71175 00:06:24.190 01:47:34 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.190 01:47:34 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.190 01:47:34 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71175' 00:06:24.190 01:47:34 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71175 00:06:24.190 01:47:34 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71175 00:06:24.449 01:47:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71193 ]] 00:06:24.449 01:47:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71193 00:06:24.449 01:47:34 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71193 ']' 00:06:24.449 01:47:34 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71193 00:06:24.449 01:47:34 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:24.449 01:47:34 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.449 
01:47:34 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71193 00:06:24.449 killing process with pid 71193 00:06:24.449 01:47:34 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:24.449 01:47:34 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:24.449 01:47:34 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71193' 00:06:24.449 01:47:34 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71193 00:06:24.449 01:47:34 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71193 00:06:24.708 01:47:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:24.708 Process with pid 71175 is not found 00:06:24.708 01:47:35 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:24.708 01:47:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71175 ]] 00:06:24.708 01:47:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71175 00:06:24.708 01:47:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71175 ']' 00:06:24.708 01:47:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71175 00:06:24.708 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71175) - No such process 00:06:24.708 01:47:35 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71175 is not found' 00:06:24.708 01:47:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71193 ]] 00:06:24.708 01:47:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71193 00:06:24.708 01:47:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71193 ']' 00:06:24.708 01:47:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71193 00:06:24.708 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71193) - No such process 00:06:24.708 Process with pid 71193 is not found 00:06:24.708 01:47:35 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71193 is not found' 00:06:24.708 01:47:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:24.708 00:06:24.708 real 0m14.151s 00:06:24.708 user 0m25.318s 00:06:24.708 sys 0m3.970s 00:06:24.708 01:47:35 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.708 01:47:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.708 ************************************ 00:06:24.708 END TEST cpu_locks 00:06:24.708 ************************************ 00:06:24.708 ************************************ 00:06:24.708 END TEST event 00:06:24.708 ************************************ 00:06:24.708 00:06:24.708 real 0m39.690s 00:06:24.708 user 1m18.174s 00:06:24.708 sys 0m7.059s 00:06:24.708 01:47:35 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.708 01:47:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.708 01:47:35 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:24.708 01:47:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.708 01:47:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.708 01:47:35 -- common/autotest_common.sh@10 -- # set +x 00:06:24.708 ************************************ 00:06:24.708 START TEST thread 00:06:24.708 ************************************ 00:06:24.708 01:47:35 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:24.708 * Looking for test storage... 
00:06:24.708 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:24.708 01:47:35 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:24.708 01:47:35 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:24.708 01:47:35 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:24.967 01:47:35 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:24.967 01:47:35 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.967 01:47:35 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.967 01:47:35 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.967 01:47:35 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.967 01:47:35 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.967 01:47:35 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.967 01:47:35 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.967 01:47:35 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.968 01:47:35 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.968 01:47:35 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.968 01:47:35 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.968 01:47:35 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:24.968 01:47:35 thread -- scripts/common.sh@345 -- # : 1 00:06:24.968 01:47:35 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.968 01:47:35 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:24.968 01:47:35 thread -- scripts/common.sh@365 -- # decimal 1 00:06:24.968 01:47:35 thread -- scripts/common.sh@353 -- # local d=1 00:06:24.968 01:47:35 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.968 01:47:35 thread -- scripts/common.sh@355 -- # echo 1 00:06:24.968 01:47:35 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.968 01:47:35 thread -- scripts/common.sh@366 -- # decimal 2 00:06:24.968 01:47:35 thread -- scripts/common.sh@353 -- # local d=2 00:06:24.968 01:47:35 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.968 01:47:35 thread -- scripts/common.sh@355 -- # echo 2 00:06:24.968 01:47:35 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.968 01:47:35 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.968 01:47:35 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.968 01:47:35 thread -- scripts/common.sh@368 -- # return 0 00:06:24.968 01:47:35 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.968 01:47:35 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:24.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.968 --rc genhtml_branch_coverage=1 00:06:24.968 --rc genhtml_function_coverage=1 00:06:24.968 --rc genhtml_legend=1 00:06:24.968 --rc geninfo_all_blocks=1 00:06:24.968 --rc geninfo_unexecuted_blocks=1 00:06:24.968 00:06:24.968 ' 00:06:24.968 01:47:35 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:24.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.968 --rc genhtml_branch_coverage=1 00:06:24.968 --rc genhtml_function_coverage=1 00:06:24.968 --rc genhtml_legend=1 00:06:24.968 --rc geninfo_all_blocks=1 00:06:24.968 --rc geninfo_unexecuted_blocks=1 00:06:24.968 00:06:24.968 ' 00:06:24.968 01:47:35 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:24.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:24.968 --rc genhtml_branch_coverage=1 00:06:24.968 --rc genhtml_function_coverage=1 00:06:24.968 --rc genhtml_legend=1 00:06:24.968 --rc geninfo_all_blocks=1 00:06:24.968 --rc geninfo_unexecuted_blocks=1 00:06:24.968 00:06:24.968 ' 00:06:24.968 01:47:35 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:24.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.968 --rc genhtml_branch_coverage=1 00:06:24.968 --rc genhtml_function_coverage=1 00:06:24.968 --rc genhtml_legend=1 00:06:24.968 --rc geninfo_all_blocks=1 00:06:24.968 --rc geninfo_unexecuted_blocks=1 00:06:24.968 00:06:24.968 ' 00:06:24.968 01:47:35 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:24.968 01:47:35 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:24.968 01:47:35 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.968 01:47:35 thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.968 ************************************ 00:06:24.968 START TEST thread_poller_perf 00:06:24.968 ************************************ 00:06:24.968 01:47:35 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:24.968 [2024-11-19 01:47:35.423405] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:24.968 [2024-11-19 01:47:35.423529] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71312 ] 00:06:24.968 [2024-11-19 01:47:35.567100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.227 Running 1000 pollers for 1 seconds with 1 microseconds period. 
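For reference, poller_perf's flags map directly onto the thread library: -b 1000 registers a thousand pollers, -l 1 gives each a 1-microsecond period, and the -l 0 run that follows uses untimed pollers invoked on every reactor iteration. A sketch of the registration step using the public spdk_poller_register() call; the no-op callback and counter are illustrative stand-ins, not the tool's actual internals:

    #include "spdk/thread.h"

    static uint64_t g_run_count;  /* what total_run_count tallies below */

    /* Poller callback: count invocations, do no real work. */
    static int
    noop_poll(void *arg)
    {
        (void)arg;
        g_run_count++;
        return SPDK_POLLER_BUSY;
    }

    /* Register `count` pollers with the given period in microseconds
     * (0 = run on every reactor iteration, as in the second run). */
    static void
    register_pollers(struct spdk_poller **pollers, int count, uint64_t period_us)
    {
        for (int i = 0; i < count; i++) {
            pollers[i] = spdk_poller_register(noop_poll, NULL, period_us);
        }
    }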
00:06:25.227 [2024-11-19 01:47:35.588820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.164 [2024-11-19T01:47:36.779Z] ====================================== 00:06:26.164 [2024-11-19T01:47:36.779Z] busy:2207859988 (cyc) 00:06:26.164 [2024-11-19T01:47:36.779Z] total_run_count: 369000 00:06:26.164 [2024-11-19T01:47:36.779Z] tsc_hz: 2200000000 (cyc) 00:06:26.164 [2024-11-19T01:47:36.779Z] ====================================== 00:06:26.164 [2024-11-19T01:47:36.779Z] poller_cost: 5983 (cyc), 2719 (nsec) 00:06:26.164 00:06:26.164 real 0m1.226s 00:06:26.164 user 0m1.080s 00:06:26.164 sys 0m0.039s 00:06:26.164 01:47:36 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.164 01:47:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:26.164 ************************************ 00:06:26.164 END TEST thread_poller_perf 00:06:26.164 ************************************ 00:06:26.164 01:47:36 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:26.164 01:47:36 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:26.164 01:47:36 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.164 01:47:36 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.164 ************************************ 00:06:26.164 START TEST thread_poller_perf 00:06:26.164 ************************************ 00:06:26.164 01:47:36 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:26.164 [2024-11-19 01:47:36.697429] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:26.164 [2024-11-19 01:47:36.697546] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71348 ] 00:06:26.424 [2024-11-19 01:47:36.842786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.424 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:26.424 [2024-11-19 01:47:36.861253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.361 [2024-11-19T01:47:37.976Z] ====================================== 00:06:27.361 [2024-11-19T01:47:37.976Z] busy:2201970060 (cyc) 00:06:27.361 [2024-11-19T01:47:37.976Z] total_run_count: 4932000 00:06:27.361 [2024-11-19T01:47:37.976Z] tsc_hz: 2200000000 (cyc) 00:06:27.361 [2024-11-19T01:47:37.976Z] ====================================== 00:06:27.361 [2024-11-19T01:47:37.976Z] poller_cost: 446 (cyc), 202 (nsec) 00:06:27.361 00:06:27.361 real 0m1.211s 00:06:27.361 user 0m1.080s 00:06:27.361 sys 0m0.025s 00:06:27.361 01:47:37 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.361 01:47:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.361 ************************************ 00:06:27.361 END TEST thread_poller_perf 00:06:27.361 ************************************ 00:06:27.361 01:47:37 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:27.361 00:06:27.361 real 0m2.716s 00:06:27.361 user 0m2.283s 00:06:27.361 sys 0m0.208s 00:06:27.361 01:47:37 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.361 01:47:37 thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.361 ************************************ 00:06:27.361 END TEST thread 00:06:27.361 ************************************ 00:06:27.620 01:47:37 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:27.620 01:47:37 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:27.620 01:47:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.620 01:47:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.620 01:47:37 -- common/autotest_common.sh@10 -- # set +x 00:06:27.620 ************************************ 00:06:27.620 START TEST app_cmdline 00:06:27.620 ************************************ 00:06:27.620 01:47:37 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:27.620 * Looking for test storage... 
00:06:27.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:27.620 01:47:38 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:27.620 01:47:38 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:27.620 01:47:38 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:27.620 01:47:38 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:27.620 01:47:38 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.620 01:47:38 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.620 01:47:38 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.620 01:47:38 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.620 01:47:38 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.620 01:47:38 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.620 01:47:38 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.620 01:47:38 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.620 01:47:38 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.620 01:47:38 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.620 01:47:38 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.620 01:47:38 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:27.620 01:47:38 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:27.620 01:47:38 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.620 01:47:38 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:27.620 01:47:38 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:27.620 01:47:38 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:27.620 01:47:38 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.620 01:47:38 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:27.620 01:47:38 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.620 01:47:38 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:27.620 01:47:38 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:27.621 01:47:38 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.621 01:47:38 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:27.621 01:47:38 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.621 01:47:38 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.621 01:47:38 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.621 01:47:38 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:27.621 01:47:38 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.621 01:47:38 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:27.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.621 --rc genhtml_branch_coverage=1 00:06:27.621 --rc genhtml_function_coverage=1 00:06:27.621 --rc genhtml_legend=1 00:06:27.621 --rc geninfo_all_blocks=1 00:06:27.621 --rc geninfo_unexecuted_blocks=1 00:06:27.621 00:06:27.621 ' 00:06:27.621 01:47:38 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:27.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.621 --rc genhtml_branch_coverage=1 00:06:27.621 --rc genhtml_function_coverage=1 00:06:27.621 --rc genhtml_legend=1 00:06:27.621 --rc geninfo_all_blocks=1 00:06:27.621 --rc geninfo_unexecuted_blocks=1 00:06:27.621 
00:06:27.621 ' 00:06:27.621 01:47:38 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:27.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.621 --rc genhtml_branch_coverage=1 00:06:27.621 --rc genhtml_function_coverage=1 00:06:27.621 --rc genhtml_legend=1 00:06:27.621 --rc geninfo_all_blocks=1 00:06:27.621 --rc geninfo_unexecuted_blocks=1 00:06:27.621 00:06:27.621 ' 00:06:27.621 01:47:38 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:27.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.621 --rc genhtml_branch_coverage=1 00:06:27.621 --rc genhtml_function_coverage=1 00:06:27.621 --rc genhtml_legend=1 00:06:27.621 --rc geninfo_all_blocks=1 00:06:27.621 --rc geninfo_unexecuted_blocks=1 00:06:27.621 00:06:27.621 ' 00:06:27.621 01:47:38 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:27.621 01:47:38 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71430 00:06:27.621 01:47:38 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:27.621 01:47:38 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71430 00:06:27.621 01:47:38 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 71430 ']' 00:06:27.621 01:47:38 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.621 01:47:38 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.621 01:47:38 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.621 01:47:38 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.621 01:47:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:27.621 [2024-11-19 01:47:38.234234] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
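This target is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served; the env_dpdk_get_mem_stats call further down is answered with -32601 (Method not found) by exactly this kind of gate. A conceptual allowlist check in C, assuming a fixed two-entry list; SPDK's real dispatcher is more involved:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    static const char *g_allowed[] = { "spdk_get_version", "rpc_get_methods" };

    static bool rpc_is_allowed(const char *method)
    {
        for (size_t i = 0; i < sizeof(g_allowed) / sizeof(g_allowed[0]); i++) {
            if (strcmp(method, g_allowed[i]) == 0) {
                return true;
            }
        }
        return false;  /* caller answers with JSON-RPC error -32601 */
    }

    int main(void)
    {
        printf("%d\n", rpc_is_allowed("spdk_get_version"));       /* 1 */
        printf("%d\n", rpc_is_allowed("env_dpdk_get_mem_stats")); /* 0 */
        return 0;
    }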
00:06:27.621 [2024-11-19 01:47:38.234350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71430 ] 00:06:27.880 [2024-11-19 01:47:38.382179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.880 [2024-11-19 01:47:38.401935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.880 [2024-11-19 01:47:38.436409] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.140 01:47:38 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.140 01:47:38 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:28.140 01:47:38 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:28.399 { 00:06:28.399 "version": "SPDK v25.01-pre git sha1 d47eb51c9", 00:06:28.399 "fields": { 00:06:28.399 "major": 25, 00:06:28.399 "minor": 1, 00:06:28.399 "patch": 0, 00:06:28.399 "suffix": "-pre", 00:06:28.399 "commit": "d47eb51c9" 00:06:28.399 } 00:06:28.399 } 00:06:28.399 01:47:38 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:28.399 01:47:38 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:28.399 01:47:38 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:28.399 01:47:38 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:28.399 01:47:38 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:28.399 01:47:38 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.399 01:47:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:28.399 01:47:38 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:28.399 01:47:38 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:28.399 01:47:38 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.399 01:47:38 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:28.399 01:47:38 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:28.399 01:47:38 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.399 01:47:38 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:28.399 01:47:38 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.399 01:47:38 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.399 01:47:38 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.399 01:47:38 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.399 01:47:38 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.399 01:47:38 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.399 01:47:38 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:28.399 01:47:38 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:28.399 01:47:38 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:28.399 01:47:38 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:28.659 request: 00:06:28.659 { 00:06:28.659 "method": "env_dpdk_get_mem_stats", 00:06:28.659 "req_id": 1 00:06:28.659 } 00:06:28.659 Got JSON-RPC error response 00:06:28.659 response: 00:06:28.659 { 00:06:28.659 "code": -32601, 00:06:28.659 "message": "Method not found" 00:06:28.659 } 00:06:28.659 01:47:39 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:28.659 01:47:39 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:28.659 01:47:39 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:28.659 01:47:39 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:28.659 01:47:39 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71430 00:06:28.659 01:47:39 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 71430 ']' 00:06:28.659 01:47:39 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 71430 00:06:28.659 01:47:39 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:28.659 01:47:39 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.659 01:47:39 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71430 00:06:28.659 killing process with pid 71430 00:06:28.659 01:47:39 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.659 01:47:39 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.659 01:47:39 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71430' 00:06:28.659 01:47:39 app_cmdline -- common/autotest_common.sh@973 -- # kill 71430 00:06:28.659 01:47:39 app_cmdline -- common/autotest_common.sh@978 -- # wait 71430 00:06:28.917 00:06:28.917 real 0m1.445s 00:06:28.917 user 0m1.966s 00:06:28.917 sys 0m0.332s 00:06:28.917 01:47:39 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.917 ************************************ 00:06:28.917 END TEST app_cmdline 00:06:28.917 ************************************ 00:06:28.917 01:47:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:28.917 01:47:39 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:28.917 01:47:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.917 01:47:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.917 01:47:39 -- common/autotest_common.sh@10 -- # set +x 00:06:28.917 ************************************ 00:06:28.917 START TEST version 00:06:28.917 ************************************ 00:06:28.917 01:47:39 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:29.176 * Looking for test storage... 
00:06:29.176 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:29.176 01:47:39 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:29.176 01:47:39 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:29.176 01:47:39 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:29.176 01:47:39 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:29.176 01:47:39 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.176 01:47:39 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.176 01:47:39 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.176 01:47:39 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.176 01:47:39 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.176 01:47:39 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.176 01:47:39 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.176 01:47:39 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.176 01:47:39 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.176 01:47:39 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.176 01:47:39 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.176 01:47:39 version -- scripts/common.sh@344 -- # case "$op" in 00:06:29.176 01:47:39 version -- scripts/common.sh@345 -- # : 1 00:06:29.176 01:47:39 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.176 01:47:39 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:29.176 01:47:39 version -- scripts/common.sh@365 -- # decimal 1 00:06:29.176 01:47:39 version -- scripts/common.sh@353 -- # local d=1 00:06:29.176 01:47:39 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.176 01:47:39 version -- scripts/common.sh@355 -- # echo 1 00:06:29.176 01:47:39 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.176 01:47:39 version -- scripts/common.sh@366 -- # decimal 2 00:06:29.176 01:47:39 version -- scripts/common.sh@353 -- # local d=2 00:06:29.176 01:47:39 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.176 01:47:39 version -- scripts/common.sh@355 -- # echo 2 00:06:29.176 01:47:39 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.176 01:47:39 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.176 01:47:39 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.176 01:47:39 version -- scripts/common.sh@368 -- # return 0 00:06:29.176 01:47:39 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.176 01:47:39 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:29.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.176 --rc genhtml_branch_coverage=1 00:06:29.176 --rc genhtml_function_coverage=1 00:06:29.176 --rc genhtml_legend=1 00:06:29.176 --rc geninfo_all_blocks=1 00:06:29.176 --rc geninfo_unexecuted_blocks=1 00:06:29.176 00:06:29.176 ' 00:06:29.176 01:47:39 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:29.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.176 --rc genhtml_branch_coverage=1 00:06:29.176 --rc genhtml_function_coverage=1 00:06:29.176 --rc genhtml_legend=1 00:06:29.176 --rc geninfo_all_blocks=1 00:06:29.176 --rc geninfo_unexecuted_blocks=1 00:06:29.176 00:06:29.176 ' 00:06:29.176 01:47:39 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:29.176 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:29.176 --rc genhtml_branch_coverage=1 00:06:29.176 --rc genhtml_function_coverage=1 00:06:29.176 --rc genhtml_legend=1 00:06:29.176 --rc geninfo_all_blocks=1 00:06:29.176 --rc geninfo_unexecuted_blocks=1 00:06:29.176 00:06:29.176 ' 00:06:29.176 01:47:39 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:29.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.176 --rc genhtml_branch_coverage=1 00:06:29.176 --rc genhtml_function_coverage=1 00:06:29.176 --rc genhtml_legend=1 00:06:29.176 --rc geninfo_all_blocks=1 00:06:29.176 --rc geninfo_unexecuted_blocks=1 00:06:29.176 00:06:29.176 ' 00:06:29.176 01:47:39 version -- app/version.sh@17 -- # get_header_version major 00:06:29.176 01:47:39 version -- app/version.sh@14 -- # cut -f2 00:06:29.176 01:47:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:29.176 01:47:39 version -- app/version.sh@14 -- # tr -d '"' 00:06:29.176 01:47:39 version -- app/version.sh@17 -- # major=25 00:06:29.176 01:47:39 version -- app/version.sh@18 -- # get_header_version minor 00:06:29.176 01:47:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:29.176 01:47:39 version -- app/version.sh@14 -- # cut -f2 00:06:29.176 01:47:39 version -- app/version.sh@14 -- # tr -d '"' 00:06:29.176 01:47:39 version -- app/version.sh@18 -- # minor=1 00:06:29.176 01:47:39 version -- app/version.sh@19 -- # get_header_version patch 00:06:29.176 01:47:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:29.176 01:47:39 version -- app/version.sh@14 -- # cut -f2 00:06:29.176 01:47:39 version -- app/version.sh@14 -- # tr -d '"' 00:06:29.176 01:47:39 version -- app/version.sh@19 -- # patch=0 00:06:29.176 01:47:39 version -- app/version.sh@20 -- # get_header_version suffix 00:06:29.176 01:47:39 version -- app/version.sh@14 -- # cut -f2 00:06:29.176 01:47:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:29.176 01:47:39 version -- app/version.sh@14 -- # tr -d '"' 00:06:29.176 01:47:39 version -- app/version.sh@20 -- # suffix=-pre 00:06:29.176 01:47:39 version -- app/version.sh@22 -- # version=25.1 00:06:29.176 01:47:39 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:29.176 01:47:39 version -- app/version.sh@28 -- # version=25.1rc0 00:06:29.176 01:47:39 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:29.176 01:47:39 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:29.176 01:47:39 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:29.176 01:47:39 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:29.176 00:06:29.176 real 0m0.257s 00:06:29.176 user 0m0.168s 00:06:29.176 sys 0m0.119s 00:06:29.176 ************************************ 00:06:29.176 END TEST version 00:06:29.176 ************************************ 00:06:29.176 01:47:39 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.176 01:47:39 version -- common/autotest_common.sh@10 -- # set +x 00:06:29.176 01:47:39 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:29.176 01:47:39 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:29.176 01:47:39 -- spdk/autotest.sh@194 -- # uname -s 00:06:29.176 01:47:39 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:29.176 01:47:39 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:29.176 01:47:39 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:06:29.176 01:47:39 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:06:29.176 01:47:39 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:29.176 01:47:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.176 01:47:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.176 01:47:39 -- common/autotest_common.sh@10 -- # set +x 00:06:29.436 ************************************ 00:06:29.436 START TEST spdk_dd 00:06:29.436 ************************************ 00:06:29.436 01:47:39 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:29.436 * Looking for test storage... 00:06:29.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:29.436 01:47:39 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:29.436 01:47:39 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version 00:06:29.436 01:47:39 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:29.436 01:47:39 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@345 -- # : 1 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@368 -- # return 0 00:06:29.436 01:47:39 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.436 01:47:39 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:29.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.436 --rc genhtml_branch_coverage=1 00:06:29.436 --rc genhtml_function_coverage=1 00:06:29.436 --rc genhtml_legend=1 00:06:29.436 --rc geninfo_all_blocks=1 00:06:29.436 --rc geninfo_unexecuted_blocks=1 00:06:29.436 00:06:29.436 ' 00:06:29.436 01:47:39 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:29.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.436 --rc genhtml_branch_coverage=1 00:06:29.436 --rc genhtml_function_coverage=1 00:06:29.436 --rc genhtml_legend=1 00:06:29.436 --rc geninfo_all_blocks=1 00:06:29.436 --rc geninfo_unexecuted_blocks=1 00:06:29.436 00:06:29.436 ' 00:06:29.436 01:47:39 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:29.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.436 --rc genhtml_branch_coverage=1 00:06:29.436 --rc genhtml_function_coverage=1 00:06:29.436 --rc genhtml_legend=1 00:06:29.436 --rc geninfo_all_blocks=1 00:06:29.436 --rc geninfo_unexecuted_blocks=1 00:06:29.436 00:06:29.436 ' 00:06:29.436 01:47:39 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:29.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.436 --rc genhtml_branch_coverage=1 00:06:29.436 --rc genhtml_function_coverage=1 00:06:29.436 --rc genhtml_legend=1 00:06:29.436 --rc geninfo_all_blocks=1 00:06:29.436 --rc geninfo_unexecuted_blocks=1 00:06:29.436 00:06:29.436 ' 00:06:29.436 01:47:39 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:29.436 01:47:39 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:29.436 01:47:39 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.437 01:47:39 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.437 01:47:39 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.437 01:47:39 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:29.437 01:47:39 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.437 01:47:39 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:29.695 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:29.695 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:29.695 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:29.955 01:47:40 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:29.955 01:47:40 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:29.955 01:47:40 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:06:29.955 01:47:40 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:06:29.955 01:47:40 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:06:29.955 01:47:40 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:29.955 01:47:40 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:06:29.955 01:47:40 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:06:29.955 01:47:40 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:06:29.955 01:47:40 spdk_dd -- scripts/common.sh@233 -- # local class 00:06:29.955 01:47:40 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:06:29.955 01:47:40 spdk_dd -- scripts/common.sh@235 -- # local progif 00:06:29.955 01:47:40 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:06:29.955 01:47:40 spdk_dd -- scripts/common.sh@236 -- # class=01 00:06:29.955 01:47:40 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:06:29.955 01:47:40 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:06:29.955 01:47:40 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:06:29.955 01:47:40 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:06:29.955 01:47:40 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:06:29.955 01:47:40 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:06:29.955 01:47:40 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:06:29.955 01:47:40 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:06:29.955 01:47:40 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:29.955 01:47:40 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:06:29.955 01:47:40 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:29.955 01:47:40 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:06:29.955 01:47:40 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:29.955 01:47:40 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:06:29.955 01:47:40 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:29.955 01:47:40 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:29.955 01:47:40 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:06:29.955 01:47:40 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:29.955 01:47:40 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:06:29.955 01:47:40 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:29.956 01:47:40 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:06:29.956 01:47:40 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:29.956 01:47:40 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:29.956 01:47:40 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:06:29.956 01:47:40 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:29.956 01:47:40 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:29.956 01:47:40 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:29.956 01:47:40 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:29.956 01:47:40 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:29.956 01:47:40 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:29.956 01:47:40 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:29.956 01:47:40 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:29.956 01:47:40 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:29.956 01:47:40 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:29.956 01:47:40 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:06:29.956 01:47:40 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:29.956 01:47:40 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@139 -- # local lib 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
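[editor's note] The "lt 1.15 2" gates traced at several points in this run come from scripts/common.sh's cmp_versions: each version string is split on '.', '-' and ':' into an array and the fields are compared numerically, left to right. Below is a minimal standalone sketch of that comparison in bash; the splitting and return convention follow the trace, while the defaulting of missing fields to 0 and the numeric treatment of non-numeric fields (e.g. "-pre") are assumptions of this reconstruction, not the exact upstream helper.

# Sketch (assumed reconstruction) of the dotted-version "less than"
# test seen in this trace. Missing fields default to 0; fields are
# compared numerically, first differing field decides.
lt() {
    local -a ver1 ver2
    local IFS=.-:                      # split on '.', '-' and ':'
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0        # strictly less: done
        (( a > b )) && return 1        # strictly greater: not less
    done
    return 1                           # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov < 2: enable --rc lcov_branch_coverage=1 opts"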
00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
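[editor's note] The loop being traced here is dd/common.sh's check_liburing: "objdump -p | grep NEEDED" output is fed through "read -r _ lib _" and each shared-library name is matched against liburing.so.*. A self-contained sketch of the same scan follows; it assumes binutils' objdump is available, takes the binary path from the log, and simplifies by stopping at the first match, whereas the traced loop reads every NEEDED entry before deciding.

# Sketch of the NEEDED-entry scan traced above (check_liburing).
# Assumes binutils' objdump; break-on-first-match is a simplification.
bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
liburing_in_use=0
while read -r _ lib _; do
    # objdump -p prints dynamic entries like: "  NEEDED  liburing.so.2"
    if [[ $lib == liburing.so.* ]]; then
        liburing_in_use=1
        printf '* %s linked to liburing\n' "${bin##*/}"
        break
    fi
done < <(objdump -p "$bin" | grep NEEDED)
(( liburing_in_use )) || echo "${bin##*/} not linked to liburing"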
00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.956 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.23 == liburing.so.* ]] 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.23 == liburing.so.* ]] 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.23 == liburing.so.* ]] 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.23 == liburing.so.* ]] 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.23 == liburing.so.* ]] 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.23 == liburing.so.* ]] 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.23 == liburing.so.* ]] 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.23 == liburing.so.* ]] 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.23 == liburing.so.* ]] 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.23 == liburing.so.* ]] 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.23 == liburing.so.* ]] 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.23 == liburing.so.* ]] 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.23 == liburing.so.* ]] 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.23 == liburing.so.* ]] 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.23 == liburing.so.* ]] 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.23 == liburing.so.* ]] 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.23 == liburing.so.* ]] 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:29.957 * spdk_dd linked to liburing 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:29.957 01:47:40 spdk_dd -- 
common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:06:29.957 01:47:40 spdk_dd -- 
common/build_config.sh@43 -- # CONFIG_DAOS=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@84 -- # 
CONFIG_IPSEC_MB=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:06:29.957 01:47:40 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:06:29.957 01:47:40 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:29.958 01:47:40 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:06:29.958 01:47:40 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:06:29.958 01:47:40 spdk_dd -- dd/common.sh@153 -- # return 0 00:06:29.958 01:47:40 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:29.958 01:47:40 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:29.958 01:47:40 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:29.958 01:47:40 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.958 01:47:40 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:29.958 ************************************ 00:06:29.958 START TEST spdk_dd_basic_rw 00:06:29.958 ************************************ 00:06:29.958 01:47:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:29.958 * Looking for test storage... 00:06:29.958 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:29.958 01:47:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:29.958 01:47:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version 00:06:29.958 01:47:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:30.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.217 --rc genhtml_branch_coverage=1 00:06:30.217 --rc genhtml_function_coverage=1 00:06:30.217 --rc genhtml_legend=1 00:06:30.217 --rc geninfo_all_blocks=1 00:06:30.217 --rc geninfo_unexecuted_blocks=1 00:06:30.217 00:06:30.217 ' 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:30.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.217 --rc genhtml_branch_coverage=1 00:06:30.217 --rc genhtml_function_coverage=1 00:06:30.217 --rc genhtml_legend=1 00:06:30.217 --rc geninfo_all_blocks=1 00:06:30.217 --rc geninfo_unexecuted_blocks=1 00:06:30.217 00:06:30.217 ' 00:06:30.217 01:47:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:30.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.217 --rc genhtml_branch_coverage=1 00:06:30.217 --rc genhtml_function_coverage=1 00:06:30.217 --rc genhtml_legend=1 00:06:30.217 --rc geninfo_all_blocks=1 00:06:30.218 --rc geninfo_unexecuted_blocks=1 00:06:30.218 00:06:30.218 ' 00:06:30.218 01:47:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:30.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.218 --rc genhtml_branch_coverage=1 00:06:30.218 --rc genhtml_function_coverage=1 00:06:30.218 --rc genhtml_legend=1 00:06:30.218 --rc geninfo_all_blocks=1 00:06:30.218 --rc geninfo_unexecuted_blocks=1 00:06:30.218 00:06:30.218 ' 00:06:30.218 01:47:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:30.218 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:06:30.218 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.218 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.218 01:47:40 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.218 01:47:40 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.218 01:47:40 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.218 01:47:40 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.218 01:47:40 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:30.218 01:47:40 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.218 01:47:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:30.218 01:47:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:30.218 01:47:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:30.218 01:47:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:30.218 01:47:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:30.218 01:47:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:30.218 01:47:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:30.218 01:47:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:30.218 01:47:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:30.218 01:47:40 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:30.218 01:47:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:30.218 01:47:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:30.218 01:47:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:30.480 01:47:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:30.480 01:47:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:30.481 01:47:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported
Vendor Specific: Not Supported
Reset Timeout: 7500 ms
Doorbell Stride: 4 bytes
NVM Subsystem Reset: Not Supported
Command Sets Supported
NVM Command Set: Supported
Boot Partition: Not Supported
Memory Page Size Minimum: 4096 bytes
Memory Page Size Maximum: 65536 bytes
Persistent Memory Region: Not Supported
Optional Asynchronous Events Supported
Namespace Attribute Notices: Supported
Firmware Activation Notices: Not Supported
ANA Change Notices: Not Supported
PLE Aggregate Log Change Notices: Not Supported
LBA Status Info Alert Notices: Not Supported
EGE Aggregate Log Change Notices: Not Supported
Normal NVM Subsystem Shutdown event: Not Supported
Zone Descriptor Change Notices: Not Supported
Discovery Log Change Notices: Not Supported
Controller Attributes
128-bit Host Identifier: Not Supported
Non-Operational Permissive Mode: Not Supported
NVM Sets: Not Supported
Read Recovery Levels: Not Supported
Endurance Groups: Not Supported
Predictable Latency Mode: Not Supported
Traffic Based Keep ALive: Not Supported
Namespace Granularity: Not Supported
SQ Associations: Not Supported
UUID List: Not Supported
Multi-Domain Subsystem: Not Supported
Fixed Capacity Management: Not Supported
Variable Capacity Management: Not Supported
Delete Endurance Group: Not Supported
Delete NVM Set: Not Supported
Extended LBA Formats Supported: Supported
Flexible Data Placement Supported: Not Supported
Controller Memory Buffer Support
================================
Supported: No
Persistent Memory Region Support
================================
Supported: No
Admin Command Set Attributes
============================
Security Send/Receive: Not Supported
Format NVM: Supported
Firmware Activate/Download: Not Supported
Namespace Management: Supported
Device Self-Test: Not Supported
Directives: Supported
NVMe-MI: Not Supported
Virtualization Management: Not Supported
Doorbell Buffer Config: Supported
Get LBA Status Capability: Not Supported
Command & Feature Lockdown Capability: Not Supported
Abort Command Limit: 4
Async Event Request Limit: 4
Number of Firmware Slots: N/A
Firmware Slot 1 Read-Only: N/A
Firmware Activation Without Reset: N/A
Multiple Update Detection Support: N/A
Firmware Update Granularity: No Information Provided
Per-Namespace SMART Log: Yes
Asymmetric Namespace Access Log Page: Not Supported
Subsystem NQN: nqn.2019-08.org.qemu:12340
Command Effects Log Page: Supported
Get Log Page Extended Data: Supported
Telemetry Log Pages: Not Supported
Persistent Event Log Pages: Not Supported
Supported Log Pages Log Page: May Support
Commands Supported & Effects Log Page: Not Supported
Feature Identifiers & Effects Log Page: May Support
NVMe-MI Commands & Effects Log Page: May Support
Data Area 4 for Telemetry Log: Not Supported
Error Log Page Entries Supported: 1
Keep Alive: Not Supported
NVM Command Set Attributes
==========================
Submission Queue Entry Size
  Max: 64
  Min: 64
Completion Queue Entry Size
  Max: 16
  Min: 16
Number of Namespaces: 256
Compare Command: Supported
Write Uncorrectable Command: Not Supported
Dataset Management Command: Supported
Write Zeroes Command: Supported
Set Features Save Field: Supported
Reservations: Not Supported
Timestamp: Supported
Copy: Supported
Volatile Write Cache: Present
Atomic Write Unit (Normal): 1
Atomic Write Unit (PFail): 1
Atomic Compare & Write Unit: 1
Fused Compare & Write: Not Supported
Scatter-Gather List
SGL Command Set: Supported
SGL Keyed: Not Supported
SGL Bit Bucket Descriptor: Not Supported
SGL Metadata Pointer: Not Supported
Oversized SGL: Not Supported
SGL Metadata Address: Not Supported
SGL Offset: Not Supported
Transport SGL Data Block: Not Supported
Replay Protected Memory Block: Not Supported
Firmware Slot Information
=========================
Active slot: 1
Slot 1 Firmware Revision: 1.0
Commands Supported and Effects
==============================
Admin Commands
--------------
Delete I/O Submission Queue (00h): Supported
Create I/O Submission Queue (01h): Supported
Get Log Page (02h): Supported
Delete I/O Completion Queue (04h): Supported
Create I/O Completion Queue (05h): Supported
Identify (06h): Supported
Abort (08h): Supported
Set Features (09h): Supported
Get Features (0Ah): Supported
Asynchronous Event Request (0Ch): Supported
Namespace Attachment (15h): Supported NS-Inventory-Change
Directive Send (19h): Supported
Directive Receive (1Ah): Supported
Virtualization Management (1Ch): Supported
Doorbell Buffer Config (7Ch): Supported
Format NVM (80h): Supported LBA-Change
I/O Commands
------------
Flush (00h): Supported LBA-Change
Write (01h): Supported LBA-Change
Read (02h): Supported
Compare (05h): Supported
Write Zeroes (08h): Supported LBA-Change
Dataset Management (09h): Supported LBA-Change
Unknown (0Ch): Supported
Unknown (12h): Supported
Copy (19h): Supported LBA-Change
Unknown (1Dh): Supported LBA-Change
Error Log
=========
Arbitration
===========
Arbitration Burst: no limit
Power Management
================
Number of Power States: 1
Current Power State: Power State #0
Power State #0:
  Max Power: 25.00 W
  Non-Operational State: Operational
  Entry Latency: 16 microseconds
  Exit Latency: 4 microseconds
  Relative Read Throughput: 0
  Relative Read Latency: 0
  Relative Write Throughput: 0
  Relative Write Latency: 0
  Idle Power: Not Reported
  Active Power: Not Reported
Non-Operational Permissive Mode: Not Supported
Health Information
==================
Critical Warnings:
  Available Spare Space: OK
  Temperature: OK
  Device Reliability: OK
  Read Only: No
  Volatile Memory Backup: OK
Current Temperature: 323 Kelvin (50 Celsius)
Temperature Threshold: 343 Kelvin (70 Celsius)
Available Spare: 0%
Available Spare Threshold: 0%
Life Percentage Used: 0%
Data Units Read: 22
Data Units Written: 3
Host Read Commands: 496
Host Write Commands: 2
Controller Busy Time: 0 minutes
Power Cycles: 0
Power On Hours: 0 hours
Unsafe Shutdowns: 0
Unrecoverable Media Errors: 0
Lifetime Error Log Entries: 0
Warning Temperature Time: 0 minutes
Critical Temperature Time: 0 minutes
Number of Queues
================
Number of I/O Submission Queues: 64
Number of I/O Completion Queues: 64
ZNS Specific Controller Data
============================
Zone Append Size Limit: 0
Active Namespaces
=================
Namespace ID:1
Error Recovery Timeout: Unlimited
Command Set Identifier: NVM (00h)
Deallocate: Supported
Deallocated/Unwritten Error: Supported
Deallocated Read Value: All 0x00
Deallocate in Write Zeroes: Not Supported
Deallocated Guard Field: 0xFFFF
Flush: Supported
Reservation: Not Supported
Namespace Sharing Capabilities: Private
Size (in LBAs): 1310720 (5GiB)
Capacity (in LBAs): 1310720 (5GiB)
Utilization (in LBAs): 1310720 (5GiB)
Thin Provisioning: Not Supported
Per-NS Atomic Units: No
Maximum Single Source Range Length: 128
Maximum Copy Length: 128
Maximum Source Range Count: 128
NGUID/EUI64 Never Reused: No
Namespace Write Protected: No
Number of LBA Formats: 8
Current LBA Format: LBA Format #04
LBA Format #00: Data Size: 512 Metadata Size: 0
LBA Format #01: Data Size: 512 Metadata Size: 8
LBA Format #02: Data Size: 512 Metadata Size: 16
LBA Format #03: Data Size: 512 Metadata Size: 64
LBA Format #04: Data Size: 4096 Metadata Size: 0
LBA Format #05: Data Size: 4096 Metadata Size: 8
LBA Format #06: Data Size: 4096 Metadata Size: 16
LBA Format #07: Data Size: 4096 Metadata Size: 64
NVM Specific Namespace Data
===========================
Logical Block Storage Tag Mask: 0
Protection Information Capabilities:
  16b Guard Protection Information Storage Tag Support: No
  16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0
  Storage Tag Check Read Support: No
Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
=~ LBA Format #04: Data Size: *([0-9]+) ]]
00:06:30.481 01:47:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096
00:06:30.481 01:47:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096
00:06:30.481 01:47:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096
00:06:30.481 01:47:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # :
00:06:30.481 01:47:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:06:30.481 01:47:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf
00:06:30.481 01:47:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable
00:06:30.481 01:47:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:06:30.481 01:47:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:06:30.481 01:47:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:30.481 01:47:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:06:30.481 ************************************
00:06:30.481 START TEST dd_bs_lt_native_bs
00:06:30.481 ************************************
00:06:30.481 01:47:40 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:06:30.481 01:47:40 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0
00:06:30.481 01:47:40 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:06:30.481 01:47:40 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:06:30.481 01:47:40 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:30.481 01:47:40 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.481 01:47:40 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.481 01:47:40 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.481 01:47:40 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:30.481 01:47:40 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:30.481 01:47:40 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:30.481 01:47:40 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:30.481 { 00:06:30.481 "subsystems": [ 00:06:30.481 { 00:06:30.481 "subsystem": "bdev", 00:06:30.481 "config": [ 00:06:30.481 { 00:06:30.481 "params": { 00:06:30.481 "trtype": "pcie", 00:06:30.481 "traddr": "0000:00:10.0", 00:06:30.481 "name": "Nvme0" 00:06:30.481 }, 00:06:30.481 "method": "bdev_nvme_attach_controller" 00:06:30.481 }, 00:06:30.481 { 00:06:30.481 "method": "bdev_wait_for_examine" 00:06:30.481 } 00:06:30.481 ] 00:06:30.481 } 00:06:30.481 ] 00:06:30.481 } 00:06:30.481 [2024-11-19 01:47:40.905164] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:30.481 [2024-11-19 01:47:40.905283] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71769 ] 00:06:30.481 [2024-11-19 01:47:41.056987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.482 [2024-11-19 01:47:41.080654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.741 [2024-11-19 01:47:41.115496] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.741 [2024-11-19 01:47:41.209164] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:30.741 [2024-11-19 01:47:41.209249] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:30.741 [2024-11-19 01:47:41.284008] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:30.741 01:47:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:06:30.741 01:47:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:30.741 01:47:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:06:30.741 01:47:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:06:30.741 01:47:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:06:30.741 01:47:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:30.741 00:06:30.741 real 0m0.486s 00:06:30.741 user 0m0.316s 00:06:30.741 sys 0m0.124s 00:06:30.741 01:47:41 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.741 01:47:41 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:30.741 ************************************ 00:06:30.741 END TEST dd_bs_lt_native_bs 00:06:30.741 ************************************ 00:06:31.000 01:47:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:31.000 01:47:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:31.000 01:47:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.000 01:47:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:31.000 ************************************ 00:06:31.000 START TEST dd_rw 00:06:31.000 ************************************ 00:06:31.000 01:47:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:06:31.000 01:47:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:31.000 01:47:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:31.000 01:47:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:31.000 01:47:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:31.000 01:47:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:31.000 01:47:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:31.000 01:47:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:31.000 01:47:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:31.000 01:47:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:31.000 01:47:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:31.000 01:47:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:31.000 01:47:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:31.000 01:47:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:31.000 01:47:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:31.000 01:47:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:31.000 01:47:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:31.000 01:47:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:31.000 01:47:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:31.569 01:47:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:31.569 01:47:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:31.569 01:47:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:31.569 01:47:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:31.569 [2024-11-19 01:47:42.038972] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
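The dd_bs_lt_native_bs test that just finished is a negative test: NOT runs spdk_dd with --bs=2048, half the 4096-byte native block size, and the test passes only because spdk_dd refuses the copy. The es=234, es=106, es=1 chain in the trace is exit-status folding: 234 is the 8-bit encoding of an -EINVAL exit (256 - 22 = 234), the helper strips the 128 offset, and any remaining nonzero code collapses to 1 before being inverted. A minimal bash sketch of that logic; the function name and the exact folding rules are inferred from this trace, not the verbatim autotest_common.sh source:

    # Succeeds exactly when the wrapped command fails, mirroring NOT above.
    not_sketch() {
      local es=0
      "$@" || es=$?                      # capture the exit status instead of aborting
      ((es > 128)) && es=$((es - 128))   # 234 -> 106, as seen in the trace
      ((es != 0)) && es=1                # 106 -> 1: any failure becomes generic
      return $((!es))                    # invert: failure of "$@" means success here
    }
    not_sketch false && echo "wrapped command failed, as required"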
00:06:31.569 [2024-11-19 01:47:42.039691] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71800 ] 00:06:31.569 { 00:06:31.569 "subsystems": [ 00:06:31.569 { 00:06:31.569 "subsystem": "bdev", 00:06:31.569 "config": [ 00:06:31.569 { 00:06:31.569 "params": { 00:06:31.569 "trtype": "pcie", 00:06:31.569 "traddr": "0000:00:10.0", 00:06:31.569 "name": "Nvme0" 00:06:31.569 }, 00:06:31.569 "method": "bdev_nvme_attach_controller" 00:06:31.569 }, 00:06:31.569 { 00:06:31.569 "method": "bdev_wait_for_examine" 00:06:31.569 } 00:06:31.569 ] 00:06:31.569 } 00:06:31.569 ] 00:06:31.569 } 00:06:31.828 [2024-11-19 01:47:42.193151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.828 [2024-11-19 01:47:42.217907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.828 [2024-11-19 01:47:42.251483] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:31.828  [2024-11-19T01:47:42.702Z] Copying: 60/60 [kB] (average 19 MBps) 00:06:32.087 00:06:32.087 01:47:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:32.087 01:47:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:32.087 01:47:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:32.087 01:47:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:32.087 [2024-11-19 01:47:42.498113] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
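The qds=(1 64) and bss arrays initialized at the top of dd_rw above determine everything that follows: native_bs=4096 (the data size of the current LBA format) is left-shifted through {0..2} to give block sizes 4096, 8192 and 16384, and each size runs at queue depths 1 and 64, six write/read/verify cycles in all. The byte totals in the trace are simply count times block size: 15 * 4096 = 61440, 7 * 8192 = 57344, 3 * 16384 = 49152. A small bash sketch of the expansion, with echo standing in for the real copy cycle:

    native_bs=4096                    # parsed from "LBA Format #04: Data Size: 4096"
    qds=(1 64)
    bss=()
    for bs in {0..2}; do
      bss+=($((native_bs << bs)))     # 4096, 8192, 16384
    done
    for bs in "${bss[@]}"; do
      for qd in "${qds[@]}"; do
        echo "bs=$bs qd=$qd"          # one write/read/verify cycle per pair
      done
    done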
00:06:32.087 [2024-11-19 01:47:42.498196] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71813 ] 00:06:32.088 { 00:06:32.088 "subsystems": [ 00:06:32.088 { 00:06:32.088 "subsystem": "bdev", 00:06:32.088 "config": [ 00:06:32.088 { 00:06:32.088 "params": { 00:06:32.088 "trtype": "pcie", 00:06:32.088 "traddr": "0000:00:10.0", 00:06:32.088 "name": "Nvme0" 00:06:32.088 }, 00:06:32.088 "method": "bdev_nvme_attach_controller" 00:06:32.088 }, 00:06:32.088 { 00:06:32.088 "method": "bdev_wait_for_examine" 00:06:32.088 } 00:06:32.088 ] 00:06:32.088 } 00:06:32.088 ] 00:06:32.088 } 00:06:32.088 [2024-11-19 01:47:42.635649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.088 [2024-11-19 01:47:42.654788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.088 [2024-11-19 01:47:42.681755] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.347  [2024-11-19T01:47:42.962Z] Copying: 60/60 [kB] (average 19 MBps) 00:06:32.347 00:06:32.347 01:47:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:32.347 01:47:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:32.347 01:47:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:32.347 01:47:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:32.347 01:47:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:32.347 01:47:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:32.347 01:47:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:32.347 01:47:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:32.347 01:47:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:32.347 01:47:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:32.347 01:47:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:32.347 [2024-11-19 01:47:42.944433] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:32.347 [2024-11-19 01:47:42.945288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71829 ] 00:06:32.347 { 00:06:32.347 "subsystems": [ 00:06:32.347 { 00:06:32.347 "subsystem": "bdev", 00:06:32.347 "config": [ 00:06:32.347 { 00:06:32.347 "params": { 00:06:32.347 "trtype": "pcie", 00:06:32.347 "traddr": "0000:00:10.0", 00:06:32.347 "name": "Nvme0" 00:06:32.347 }, 00:06:32.347 "method": "bdev_nvme_attach_controller" 00:06:32.347 }, 00:06:32.347 { 00:06:32.347 "method": "bdev_wait_for_examine" 00:06:32.347 } 00:06:32.347 ] 00:06:32.347 } 00:06:32.347 ] 00:06:32.347 } 00:06:32.606 [2024-11-19 01:47:43.089819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.606 [2024-11-19 01:47:43.107868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.606 [2024-11-19 01:47:43.135126] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.606  [2024-11-19T01:47:43.480Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:32.865 00:06:32.865 01:47:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:32.865 01:47:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:32.865 01:47:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:32.865 01:47:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:32.865 01:47:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:32.865 01:47:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:32.865 01:47:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:33.433 01:47:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:33.433 01:47:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:33.433 01:47:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:33.433 01:47:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:33.433 [2024-11-19 01:47:43.862031] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
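Each spdk_dd invocation in this trace receives the same bdev configuration on --json /dev/fd/62: attach the PCIe controller at 0000:00:10.0 as Nvme0, then wait for bdev examination to finish before any I/O is issued. A self-contained equivalent of what gen_conf feeds through that descriptor, written with process substitution (which is exactly what produces a /dev/fd path in bash):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 \
      --ob=Nvme0n1 --count=1 --json <(printf '%s' '
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
              "method": "bdev_nvme_attach_controller"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }')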
00:06:33.433 [2024-11-19 01:47:43.862112] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71848 ] 00:06:33.433 { 00:06:33.433 "subsystems": [ 00:06:33.433 { 00:06:33.433 "subsystem": "bdev", 00:06:33.433 "config": [ 00:06:33.433 { 00:06:33.433 "params": { 00:06:33.433 "trtype": "pcie", 00:06:33.433 "traddr": "0000:00:10.0", 00:06:33.433 "name": "Nvme0" 00:06:33.433 }, 00:06:33.433 "method": "bdev_nvme_attach_controller" 00:06:33.433 }, 00:06:33.433 { 00:06:33.433 "method": "bdev_wait_for_examine" 00:06:33.433 } 00:06:33.433 ] 00:06:33.433 } 00:06:33.433 ] 00:06:33.433 } 00:06:33.433 [2024-11-19 01:47:43.996303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.433 [2024-11-19 01:47:44.016145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.433 [2024-11-19 01:47:44.044414] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.692  [2024-11-19T01:47:44.307Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:33.692 00:06:33.692 01:47:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:33.692 01:47:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:33.692 01:47:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:33.692 01:47:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:33.692 { 00:06:33.692 "subsystems": [ 00:06:33.692 { 00:06:33.692 "subsystem": "bdev", 00:06:33.692 "config": [ 00:06:33.692 { 00:06:33.692 "params": { 00:06:33.692 "trtype": "pcie", 00:06:33.692 "traddr": "0000:00:10.0", 00:06:33.692 "name": "Nvme0" 00:06:33.692 }, 00:06:33.692 "method": "bdev_nvme_attach_controller" 00:06:33.692 }, 00:06:33.692 { 00:06:33.692 "method": "bdev_wait_for_examine" 00:06:33.692 } 00:06:33.692 ] 00:06:33.692 } 00:06:33.692 ] 00:06:33.692 } 00:06:33.692 [2024-11-19 01:47:44.303014] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:33.692 [2024-11-19 01:47:44.303123] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71856 ] 00:06:33.952 [2024-11-19 01:47:44.448319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.952 [2024-11-19 01:47:44.466496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.952 [2024-11-19 01:47:44.495474] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.212  [2024-11-19T01:47:44.827Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:34.212 00:06:34.212 01:47:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:34.212 01:47:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:34.212 01:47:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:34.212 01:47:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:34.212 01:47:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:34.212 01:47:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:34.212 01:47:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:34.212 01:47:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:34.212 01:47:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:34.212 01:47:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:34.212 01:47:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:34.212 [2024-11-19 01:47:44.774056] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:34.212 [2024-11-19 01:47:44.774834] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71871 ] 00:06:34.212 { 00:06:34.212 "subsystems": [ 00:06:34.212 { 00:06:34.212 "subsystem": "bdev", 00:06:34.212 "config": [ 00:06:34.212 { 00:06:34.212 "params": { 00:06:34.212 "trtype": "pcie", 00:06:34.212 "traddr": "0000:00:10.0", 00:06:34.212 "name": "Nvme0" 00:06:34.212 }, 00:06:34.212 "method": "bdev_nvme_attach_controller" 00:06:34.212 }, 00:06:34.212 { 00:06:34.212 "method": "bdev_wait_for_examine" 00:06:34.212 } 00:06:34.212 ] 00:06:34.212 } 00:06:34.212 ] 00:06:34.212 } 00:06:34.472 [2024-11-19 01:47:44.923777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.472 [2024-11-19 01:47:44.943589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.472 [2024-11-19 01:47:44.971577] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.472  [2024-11-19T01:47:45.347Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:34.732 00:06:34.732 01:47:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:34.732 01:47:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:34.732 01:47:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:34.732 01:47:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:34.732 01:47:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:34.732 01:47:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:34.732 01:47:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:34.732 01:47:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:35.300 01:47:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:35.300 01:47:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:35.300 01:47:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:35.300 01:47:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:35.300 [2024-11-19 01:47:45.743275] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:35.300 [2024-11-19 01:47:45.743983] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71890 ] 00:06:35.300 { 00:06:35.300 "subsystems": [ 00:06:35.300 { 00:06:35.300 "subsystem": "bdev", 00:06:35.300 "config": [ 00:06:35.300 { 00:06:35.300 "params": { 00:06:35.300 "trtype": "pcie", 00:06:35.300 "traddr": "0000:00:10.0", 00:06:35.300 "name": "Nvme0" 00:06:35.300 }, 00:06:35.300 "method": "bdev_nvme_attach_controller" 00:06:35.300 }, 00:06:35.300 { 00:06:35.300 "method": "bdev_wait_for_examine" 00:06:35.300 } 00:06:35.300 ] 00:06:35.300 } 00:06:35.300 ] 00:06:35.300 } 00:06:35.300 [2024-11-19 01:47:45.888055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.300 [2024-11-19 01:47:45.907196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.559 [2024-11-19 01:47:45.936442] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.559  [2024-11-19T01:47:46.174Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:35.559 00:06:35.559 01:47:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:35.559 01:47:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:35.559 01:47:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:35.559 01:47:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:35.818 [2024-11-19 01:47:46.182263] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:35.818 [2024-11-19 01:47:46.182358] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71904 ] 00:06:35.818 { 00:06:35.818 "subsystems": [ 00:06:35.818 { 00:06:35.818 "subsystem": "bdev", 00:06:35.819 "config": [ 00:06:35.819 { 00:06:35.819 "params": { 00:06:35.819 "trtype": "pcie", 00:06:35.819 "traddr": "0000:00:10.0", 00:06:35.819 "name": "Nvme0" 00:06:35.819 }, 00:06:35.819 "method": "bdev_nvme_attach_controller" 00:06:35.819 }, 00:06:35.819 { 00:06:35.819 "method": "bdev_wait_for_examine" 00:06:35.819 } 00:06:35.819 ] 00:06:35.819 } 00:06:35.819 ] 00:06:35.819 } 00:06:35.819 [2024-11-19 01:47:46.326022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.819 [2024-11-19 01:47:46.345027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.819 [2024-11-19 01:47:46.377151] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.078  [2024-11-19T01:47:46.693Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:36.078 00:06:36.078 01:47:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:36.078 01:47:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:36.078 01:47:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:36.078 01:47:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:36.078 01:47:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:36.078 01:47:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:36.078 01:47:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:36.078 01:47:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:36.078 01:47:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:36.078 01:47:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:36.078 01:47:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:36.078 [2024-11-19 01:47:46.631833] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:36.078 [2024-11-19 01:47:46.632458] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71919 ] 00:06:36.078 { 00:06:36.078 "subsystems": [ 00:06:36.078 { 00:06:36.078 "subsystem": "bdev", 00:06:36.078 "config": [ 00:06:36.078 { 00:06:36.078 "params": { 00:06:36.078 "trtype": "pcie", 00:06:36.078 "traddr": "0000:00:10.0", 00:06:36.078 "name": "Nvme0" 00:06:36.078 }, 00:06:36.078 "method": "bdev_nvme_attach_controller" 00:06:36.078 }, 00:06:36.078 { 00:06:36.078 "method": "bdev_wait_for_examine" 00:06:36.078 } 00:06:36.078 ] 00:06:36.078 } 00:06:36.078 ] 00:06:36.078 } 00:06:36.337 [2024-11-19 01:47:46.781360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.337 [2024-11-19 01:47:46.799976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.337 [2024-11-19 01:47:46.830275] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.337  [2024-11-19T01:47:47.211Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:36.596 00:06:36.596 01:47:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:36.596 01:47:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:36.596 01:47:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:36.596 01:47:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:36.596 01:47:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:36.596 01:47:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:36.596 01:47:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:37.164 01:47:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:37.164 01:47:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:37.164 01:47:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:37.164 01:47:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:37.164 [2024-11-19 01:47:47.609845] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:37.164 [2024-11-19 01:47:47.609945] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71938 ] 00:06:37.164 { 00:06:37.164 "subsystems": [ 00:06:37.164 { 00:06:37.164 "subsystem": "bdev", 00:06:37.164 "config": [ 00:06:37.164 { 00:06:37.164 "params": { 00:06:37.164 "trtype": "pcie", 00:06:37.164 "traddr": "0000:00:10.0", 00:06:37.164 "name": "Nvme0" 00:06:37.164 }, 00:06:37.164 "method": "bdev_nvme_attach_controller" 00:06:37.164 }, 00:06:37.164 { 00:06:37.164 "method": "bdev_wait_for_examine" 00:06:37.164 } 00:06:37.164 ] 00:06:37.164 } 00:06:37.164 ] 00:06:37.164 } 00:06:37.164 [2024-11-19 01:47:47.756283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.164 [2024-11-19 01:47:47.775839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.423 [2024-11-19 01:47:47.805554] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.423  [2024-11-19T01:47:48.038Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:37.423 00:06:37.423 01:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:37.423 01:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:37.423 01:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:37.423 01:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:37.682 [2024-11-19 01:47:48.056517] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:37.682 [2024-11-19 01:47:48.056636] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71952 ] 00:06:37.682 { 00:06:37.682 "subsystems": [ 00:06:37.682 { 00:06:37.682 "subsystem": "bdev", 00:06:37.682 "config": [ 00:06:37.682 { 00:06:37.682 "params": { 00:06:37.682 "trtype": "pcie", 00:06:37.682 "traddr": "0000:00:10.0", 00:06:37.682 "name": "Nvme0" 00:06:37.682 }, 00:06:37.682 "method": "bdev_nvme_attach_controller" 00:06:37.682 }, 00:06:37.682 { 00:06:37.682 "method": "bdev_wait_for_examine" 00:06:37.682 } 00:06:37.682 ] 00:06:37.682 } 00:06:37.682 ] 00:06:37.682 } 00:06:37.682 [2024-11-19 01:47:48.201152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.682 [2024-11-19 01:47:48.220399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.682 [2024-11-19 01:47:48.248430] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.941  [2024-11-19T01:47:48.556Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:37.941 00:06:37.941 01:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:37.941 01:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:37.941 01:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:37.941 01:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:37.941 01:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:37.941 01:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:37.941 01:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:37.941 01:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:37.941 01:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:37.941 01:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:37.941 01:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:37.941 [2024-11-19 01:47:48.506054] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:37.941 [2024-11-19 01:47:48.506566] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71966 ] 00:06:37.941 { 00:06:37.941 "subsystems": [ 00:06:37.941 { 00:06:37.941 "subsystem": "bdev", 00:06:37.941 "config": [ 00:06:37.941 { 00:06:37.941 "params": { 00:06:37.941 "trtype": "pcie", 00:06:37.941 "traddr": "0000:00:10.0", 00:06:37.941 "name": "Nvme0" 00:06:37.941 }, 00:06:37.941 "method": "bdev_nvme_attach_controller" 00:06:37.941 }, 00:06:37.941 { 00:06:37.941 "method": "bdev_wait_for_examine" 00:06:37.941 } 00:06:37.941 ] 00:06:37.941 } 00:06:37.941 ] 00:06:37.941 } 00:06:38.200 [2024-11-19 01:47:48.650928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.200 [2024-11-19 01:47:48.670830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.200 [2024-11-19 01:47:48.701884] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:38.200  [2024-11-19T01:47:49.074Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:38.459 00:06:38.459 01:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:38.459 01:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:38.459 01:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:38.459 01:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:38.459 01:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:38.459 01:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:38.459 01:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:38.459 01:47:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:39.033 01:47:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:39.033 01:47:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:39.033 01:47:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:39.033 01:47:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:39.033 [2024-11-19 01:47:49.412822] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:39.033 [2024-11-19 01:47:49.412920] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71981 ] 00:06:39.033 { 00:06:39.033 "subsystems": [ 00:06:39.033 { 00:06:39.033 "subsystem": "bdev", 00:06:39.033 "config": [ 00:06:39.033 { 00:06:39.033 "params": { 00:06:39.033 "trtype": "pcie", 00:06:39.033 "traddr": "0000:00:10.0", 00:06:39.033 "name": "Nvme0" 00:06:39.033 }, 00:06:39.033 "method": "bdev_nvme_attach_controller" 00:06:39.033 }, 00:06:39.033 { 00:06:39.033 "method": "bdev_wait_for_examine" 00:06:39.033 } 00:06:39.033 ] 00:06:39.033 } 00:06:39.033 ] 00:06:39.033 } 00:06:39.033 [2024-11-19 01:47:49.557250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.033 [2024-11-19 01:47:49.575334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.033 [2024-11-19 01:47:49.604694] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.293  [2024-11-19T01:47:49.908Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:39.293 00:06:39.293 01:47:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:39.293 01:47:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:39.293 01:47:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:39.293 01:47:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:39.293 [2024-11-19 01:47:49.856148] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:39.293 [2024-11-19 01:47:49.856247] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71994 ] 00:06:39.293 { 00:06:39.293 "subsystems": [ 00:06:39.293 { 00:06:39.293 "subsystem": "bdev", 00:06:39.293 "config": [ 00:06:39.293 { 00:06:39.293 "params": { 00:06:39.293 "trtype": "pcie", 00:06:39.293 "traddr": "0000:00:10.0", 00:06:39.293 "name": "Nvme0" 00:06:39.293 }, 00:06:39.293 "method": "bdev_nvme_attach_controller" 00:06:39.293 }, 00:06:39.293 { 00:06:39.293 "method": "bdev_wait_for_examine" 00:06:39.293 } 00:06:39.293 ] 00:06:39.293 } 00:06:39.293 ] 00:06:39.293 } 00:06:39.552 [2024-11-19 01:47:50.001829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.552 [2024-11-19 01:47:50.024041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.552 [2024-11-19 01:47:50.052258] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.552  [2024-11-19T01:47:50.426Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:39.811 00:06:39.811 01:47:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:39.811 01:47:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:39.811 01:47:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:39.811 01:47:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:39.811 01:47:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:39.811 01:47:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:39.811 01:47:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:39.811 01:47:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:39.811 01:47:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:39.811 01:47:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:39.811 01:47:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:39.811 [2024-11-19 01:47:50.306627] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:39.811 [2024-11-19 01:47:50.306731] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72010 ] 00:06:39.811 { 00:06:39.811 "subsystems": [ 00:06:39.811 { 00:06:39.811 "subsystem": "bdev", 00:06:39.811 "config": [ 00:06:39.811 { 00:06:39.811 "params": { 00:06:39.811 "trtype": "pcie", 00:06:39.811 "traddr": "0000:00:10.0", 00:06:39.811 "name": "Nvme0" 00:06:39.811 }, 00:06:39.811 "method": "bdev_nvme_attach_controller" 00:06:39.811 }, 00:06:39.811 { 00:06:39.811 "method": "bdev_wait_for_examine" 00:06:39.811 } 00:06:39.811 ] 00:06:39.811 } 00:06:39.811 ] 00:06:39.811 } 00:06:40.071 [2024-11-19 01:47:50.450177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.071 [2024-11-19 01:47:50.468038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.071 [2024-11-19 01:47:50.496076] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.071  [2024-11-19T01:47:50.686Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:40.071 00:06:40.329 01:47:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:40.329 01:47:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:40.329 01:47:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:40.329 01:47:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:40.329 01:47:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:40.329 01:47:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:40.329 01:47:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:40.588 01:47:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:40.588 01:47:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:40.588 01:47:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:40.588 01:47:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:40.588 [2024-11-19 01:47:51.197290] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:40.588 [2024-11-19 01:47:51.197415] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72029 ] 00:06:40.588 { 00:06:40.588 "subsystems": [ 00:06:40.588 { 00:06:40.588 "subsystem": "bdev", 00:06:40.588 "config": [ 00:06:40.588 { 00:06:40.588 "params": { 00:06:40.588 "trtype": "pcie", 00:06:40.588 "traddr": "0000:00:10.0", 00:06:40.588 "name": "Nvme0" 00:06:40.588 }, 00:06:40.588 "method": "bdev_nvme_attach_controller" 00:06:40.588 }, 00:06:40.588 { 00:06:40.588 "method": "bdev_wait_for_examine" 00:06:40.588 } 00:06:40.588 ] 00:06:40.588 } 00:06:40.588 ] 00:06:40.588 } 00:06:40.848 [2024-11-19 01:47:51.349104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.848 [2024-11-19 01:47:51.372336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.848 [2024-11-19 01:47:51.406721] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.106  [2024-11-19T01:47:51.721Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:41.106 00:06:41.106 01:47:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:41.106 01:47:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:41.106 01:47:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:41.106 01:47:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:41.106 [2024-11-19 01:47:51.653799] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:41.106 [2024-11-19 01:47:51.654284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72042 ] 00:06:41.106 { 00:06:41.106 "subsystems": [ 00:06:41.106 { 00:06:41.106 "subsystem": "bdev", 00:06:41.106 "config": [ 00:06:41.106 { 00:06:41.106 "params": { 00:06:41.106 "trtype": "pcie", 00:06:41.106 "traddr": "0000:00:10.0", 00:06:41.106 "name": "Nvme0" 00:06:41.106 }, 00:06:41.106 "method": "bdev_nvme_attach_controller" 00:06:41.106 }, 00:06:41.106 { 00:06:41.106 "method": "bdev_wait_for_examine" 00:06:41.106 } 00:06:41.106 ] 00:06:41.106 } 00:06:41.106 ] 00:06:41.106 } 00:06:41.366 [2024-11-19 01:47:51.792308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.366 [2024-11-19 01:47:51.810372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.366 [2024-11-19 01:47:51.837189] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.366  [2024-11-19T01:47:52.240Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:41.625 00:06:41.625 01:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:41.625 01:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:41.625 01:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:41.625 01:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:41.625 01:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:41.625 01:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:41.625 01:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:41.625 01:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:41.625 01:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:41.625 01:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:41.625 01:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:41.625 [2024-11-19 01:47:52.081547] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:41.625 [2024-11-19 01:47:52.081642] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72058 ] 00:06:41.625 { 00:06:41.625 "subsystems": [ 00:06:41.625 { 00:06:41.625 "subsystem": "bdev", 00:06:41.625 "config": [ 00:06:41.625 { 00:06:41.625 "params": { 00:06:41.625 "trtype": "pcie", 00:06:41.625 "traddr": "0000:00:10.0", 00:06:41.625 "name": "Nvme0" 00:06:41.625 }, 00:06:41.625 "method": "bdev_nvme_attach_controller" 00:06:41.625 }, 00:06:41.625 { 00:06:41.625 "method": "bdev_wait_for_examine" 00:06:41.625 } 00:06:41.625 ] 00:06:41.625 } 00:06:41.625 ] 00:06:41.625 } 00:06:41.625 [2024-11-19 01:47:52.215770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.625 [2024-11-19 01:47:52.233815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.884 [2024-11-19 01:47:52.260920] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.884  [2024-11-19T01:47:52.499Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:41.884 00:06:41.884 00:06:41.884 real 0m11.068s 00:06:41.884 user 0m8.212s 00:06:41.884 sys 0m3.470s 00:06:41.884 01:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.884 01:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:41.884 ************************************ 00:06:41.884 END TEST dd_rw 00:06:41.884 ************************************ 00:06:41.884 01:47:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:41.884 01:47:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.884 01:47:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.884 01:47:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:42.143 ************************************ 00:06:42.143 START TEST dd_rw_offset 00:06:42.143 ************************************ 00:06:42.143 01:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:06:42.143 01:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:42.143 01:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:42.143 01:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:42.143 01:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:42.143 01:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:42.144 01:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=87l8vg66ej081kogx2z7p63xzr2o7njj6rdxueym0lhrihasheiwekgoo2582nw90mmg3x4snc2jmmcor82zj2qwdodhx10fprq1juvig86wxm0h39kf5u5jn03k2jqee7k6obxvmf07hv4ftwsfi40eft9z3xh86gikwtmuacj1puol94g21f2vri8iev3l2xtaysq6m0om4h1jou38m5n7u8mbeif53xmhnhezvyaim3m3vc0kjg64vi08y9hip2grumy9m2nysj737ax7miukyr0qa6y759w4kxaccnj2exgh94qpjci3l3ef1bz5d9by682xxsh6bc5eevqzf0i5fk85pax4ims53ts3j7fqpma4rwqhz0sj4ma7t315730b4e9dnwzeggp7x3dkjdsvtvkagg0cgn24k9d9kgkgn8f2srlqhnvihg9tx9bmj6kl5z0bcqeh9qrk9a1kag4jolrdmu5r2vxem9t07v7nd8rbfqj8737qn3bw25s2dnpd2nyzoum6kw8exxsjbjwitwglh5ds1uj3nbjnig0044ux9g78up44i7d7e8mj1t9g4rnfg1cs7n0mofwdo9nab3tu37f20kgfpuqylg0sgz5f9510ohn889eysl9ef4sdqtyop1d6rl7vobvjflvr089urvgsczn4gzhh4x0muxzynpw860rshwdf2ar4t2pzgkwl9myti99kztkuucpjh8rsd7slms645egoyol081h1jkfzffy3as51i6qmt9gd2ignkuo9w8nz44fbali3f8aczr05yfs8hin6brezqvi0f0qunmqcr76o8q3hmp2q2ykseenqz1pxrrw0phkbftzefag6vxxvnrdzrm81q20l0rq9e0iq418imfb9ew3hv8jxfnoihan2o5qr2978t0cikwmecuzqycq6yyrjsk9ywk1qbr5qujvkbyykozv61x39vhny07d0htj035s02ubvz19fh15h6oi5hed6e6ev2b7fm3g7lytze7qe2xo09fzf27yrwtt23xwljec3gfangcqezc5cmo50sfrcusmgtyt1ekqjefzxcnbbb1fvcrdjkuqc2eovg2wl2rgoi9okmgflwp8jdjjuerrls2hfbvg48jyni3kbfuow7xeu492hgb9f9cvi1axazquz6lcpr7m6qts5nrnppk5qcflnkhro6rdv45k5y3znyvyaks935dc3ypf1ny5z70xfb5qex2yce5oky566c85oy3zhmeky0tnyp0k575vhvn6rqt33h15h3779ark1rafvv6rvfr6gqia0gwxrul4o2ep5b2v2q1onivbopnfml1x3zyhsyhrjplpu72rn4ilqi6vzsejdzx3am2ksz2bnl5j2ezmlcmqqnzeb4p7jxyiy7ur739z1d6uh72i1dqk5d0xduc5ep8oqqw9lg5de44xt1ryezo0qkjyt7nucnhjjvyve7bbxqzw12q7kv5czjqrma1tq5qjawnmey4x1x3cialteft06d4q2whv7zrx0jov33ghao5n87gai4lhpuakwol6orr1431mxwjbjd1qdpjk4uvcna2baviwa5pjtcslm0o6ybey2qv1ay5yvvcq6ygfsviw1sgtlw2ei1uzewslwtqyy0qttqhgsepdzldduejw1n7esgt0uc3abeuypipuesa0h6ix9m5c2a4q2ljhtvqsgxleoruxcs4789sszdk5tm1nukl2e118fgdnfhrsh1qa23vedfkmizu8tlcnvn3l5u6a6fpxsc7ibw1j2q7588tvklu19dd18qzc3gfh3oof9e20i2vuglfzhhl0mfmghhd8x5rb9gevfctm20zzlj2vc8sny457ee95cawb0p0xcl2zmxlwjahz8wdt6p4hpq43mma1b23k3x9lpkxiy61ytwzsratoiq89xa70549s1ilqdzofvvxdl233hpvfhk7jdzabw5kfmp9bj2k8n10umc1qb8gfveicrkdvrbaclymhsfrfi46yfvof883yl8ohwfkewczmebhb7x9n1s6slsfbodoum3l10j21jfp0ojomj62ip2wk5roimkrksov9u6qrjsz9xom4y6r749b437yke0a68pz4q5qduq99qww1nwc7kjxgrnvz8u40c4djvzvhxilsyc56b8fhxycfe51ix24a0po0mjn9dqe56x1p79k5nczdr7mswp5nusu199trnritpyerhdeo1g0eooyg9tasrcbxsxhsd40rknxic0y80nmagcxlqzyf7vzoybu12yuqsh4d6koorlzt69j63jln626962akomkxusn6xbdybll38zsju3su8caa3uw6ubdtk9owg8mzt921rbsvgkyjzgwof82jyfk75flqrfp65ruwlypoh0hy35rim35pqoopg3vml7dalbntynbhoghhfg60n0j6fvd2iehrreidtk7byete8po4bjsdwglgz4cwrywso6i3m6p7kvso5k6g5bhhz9rho2t1bjunumjfgagcgybkp4ne9yu2xu5f98vcv345wmige65dx4wp3jus2jotdkjkh8zo4w254a67f0kcaq4kslpa2dqol9x82n9c2p4wvs7rq97urwzyq2dqcoklure8ndwyn24nsmeo66e1ku9cam1z5m1so8g5d67yn33w6sjtcspneivkhssk1oh8v2ny0g1zyykqjsm49ucjqqryxoyb2y7oh8zqwjhfxhv1i7mwhsxx5i5sxqcibasdr81idgkgsmpapljik1dyhn2k167rza43vcqswrmsyfeoke3df9tkkwwq9n417u56bm39b7vx1hiukslcz8ol6lu92d5o1ndgcmutxcmxh7wuqx3x5elb0axdlcil6tdckjud7j298dr02po6a7w5hm3hpt8dxsxpgxjnjcumub4r2va48c5zh7hmyg0ueospg16udkd0bqpb1awfp29k4dquonrw1e06c9teyx8q8o0e6plz6gvmll815itzsw1wh4f0is92aimdijvs43v5kffyvaeoxxh7wdfss2o83lzx8mh7alztrrfb5ht3ct0gadm2d32yfxa3umbthnla3sflodmcp73wnp5xbmzy53odnuwej7zrmnaag7kv959v1bbpyai5bnr7bd3di6xx0w7jkmgn5dahz3yxxu4sjszan41nss3638b0lnxf4uzvl4kvgznj1q3cx8pl257uvrf3t9y1muvy0oh97c8u10dl8gb1lb2r3erwwomujnito3cd0hq31vc0pxh5hlod47w05aaqilh2u8ciww4e7dg5d3ie7mak7u6fhn61ug6uj5sj98hpuv4ueav5d3h8wps3unb6y96doodz01nwmeejwrdtt4ahs55mu92yqsusmhha0f1piuztmz6ecnywrycc1kd2fj6z9g6rjcmorno3mux1433oiltlmijxdfhpakxhxjqfju07hnwkfl2uoa5so1atac30jwted7mr1eemzwsc5bqutdelj04cwmqqcfa
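The dd_rw_offset test now underway works on a single native block at offset one: --seek=1 skips one output block when the generated 4096-character pattern is written to Nvme0n1, and --skip=1 --count=1 reads that same block back, the conventional dd meaning of seek and skip. A hedged sketch of the round trip, ending with the read -rn4096 comparison the tail of this log performs ($data holds the generated pattern shown above):

    spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    # Write one native block, starting one block into the target bdev.
    "$spdk_dd" --if="$dump0" --ob=Nvme0n1 --seek=1 --json <(gen_conf)
    # Read the same block back from input offset 1.
    "$spdk_dd" --ib=Nvme0n1 --of="$dump1" --skip=1 --count=1 --json <(gen_conf)
    # Compare the first 4096 characters read back against the pattern.
    read -rn4096 data_check < "$dump1"
    [[ $data == "$data_check" ]] && echo "offset round trip OK"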
2cx3b9xd0jabhx8f1tmcf86i5u1fhpulnz17o23fnip3ykuh8ja4ajud54tdr2pb5bke6yre6kl7hmyjtc6ca0tg2vjkrohpjbk3425eea1q09g6komslhvxwx4no0kajk9pow4kf5xvyp6o6jihqnmeq330bgji3f14eq7qz1f78l0ca1wrkyn7rj4qqsq7wpmd7tl0eoq3z4sswx8ivese3scys1ex74ayvn1p5aljzbwci8hjadfk8ljss0qv8qmaf9pxk4ba03gtzf1x8akz7ud23a3p9gbih4xdcvn8tj29uxdx2pqblcjsirt0aiff2vqqu7thcs4tazvavcaetc52z1pybw0grq6ci3ihtazgbos4wxmis86chdoi9zw4shk2rs4357tmk1t8iw9vc4rix671vl7j6muh6pngtvs4ojgheyzglvwic8itbsk72og1sgw2ms7n87dvurnaghznshtryyj8yqcwmnogofupqhwf6t6ccbehby69hewhrmg9ldan86bolffixtermiip30ixpe 00:06:42.144 01:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:42.144 01:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:42.144 01:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:42.144 01:47:52 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:42.144 [2024-11-19 01:47:52.628201] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:42.144 [2024-11-19 01:47:52.628336] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72083 ] 00:06:42.144 { 00:06:42.144 "subsystems": [ 00:06:42.144 { 00:06:42.144 "subsystem": "bdev", 00:06:42.144 "config": [ 00:06:42.144 { 00:06:42.144 "params": { 00:06:42.144 "trtype": "pcie", 00:06:42.144 "traddr": "0000:00:10.0", 00:06:42.144 "name": "Nvme0" 00:06:42.144 }, 00:06:42.144 "method": "bdev_nvme_attach_controller" 00:06:42.144 }, 00:06:42.144 { 00:06:42.144 "method": "bdev_wait_for_examine" 00:06:42.144 } 00:06:42.144 ] 00:06:42.144 } 00:06:42.144 ] 00:06:42.144 } 00:06:42.403 [2024-11-19 01:47:52.780498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.403 [2024-11-19 01:47:52.798875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.403 [2024-11-19 01:47:52.825290] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.403  [2024-11-19T01:47:53.018Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:42.403 00:06:42.662 01:47:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:42.662 01:47:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:42.662 01:47:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:42.662 01:47:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:42.662 [2024-11-19 01:47:53.075136] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
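Note: the bdev configuration that these spdk_dd invocations read from /dev/fd/62 is scattered across timestamped fragments above. Reassembled for readability, the JSON emitted by the gen_conf helper for these runs is:

{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "trtype": "pcie",
            "traddr": "0000:00:10.0",
            "name": "Nvme0"
          },
          "method": "bdev_nvme_attach_controller"
        },
        {
          "method": "bdev_wait_for_examine"
        }
      ]
    }
  ]
}

It attaches the NVMe controller at PCI address 0000:00:10.0 as the bdev Nvme0 and holds I/O until bdev examination completes.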
00:06:42.662 [2024-11-19 01:47:53.075752] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72102 ] 00:06:42.662 { 00:06:42.662 "subsystems": [ 00:06:42.662 { 00:06:42.662 "subsystem": "bdev", 00:06:42.662 "config": [ 00:06:42.662 { 00:06:42.662 "params": { 00:06:42.662 "trtype": "pcie", 00:06:42.662 "traddr": "0000:00:10.0", 00:06:42.662 "name": "Nvme0" 00:06:42.662 }, 00:06:42.662 "method": "bdev_nvme_attach_controller" 00:06:42.662 }, 00:06:42.662 { 00:06:42.662 "method": "bdev_wait_for_examine" 00:06:42.662 } 00:06:42.662 ] 00:06:42.662 } 00:06:42.662 ] 00:06:42.662 } 00:06:42.662 [2024-11-19 01:47:53.220946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.662 [2024-11-19 01:47:53.239466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.662 [2024-11-19 01:47:53.266540] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.941  [2024-11-19T01:47:53.556Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:42.941 00:06:42.941 01:47:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:42.941 ************************************ 00:06:42.942 01:47:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 87l8vg66ej081kogx2z7p63xzr2o7njj6rdxueym0lhrihasheiwekgoo2582nw90mmg3x4snc2jmmcor82zj2qwdodhx10fprq1juvig86wxm0h39kf5u5jn03k2jqee7k6obxvmf07hv4ftwsfi40eft9z3xh86gikwtmuacj1puol94g21f2vri8iev3l2xtaysq6m0om4h1jou38m5n7u8mbeif53xmhnhezvyaim3m3vc0kjg64vi08y9hip2grumy9m2nysj737ax7miukyr0qa6y759w4kxaccnj2exgh94qpjci3l3ef1bz5d9by682xxsh6bc5eevqzf0i5fk85pax4ims53ts3j7fqpma4rwqhz0sj4ma7t315730b4e9dnwzeggp7x3dkjdsvtvkagg0cgn24k9d9kgkgn8f2srlqhnvihg9tx9bmj6kl5z0bcqeh9qrk9a1kag4jolrdmu5r2vxem9t07v7nd8rbfqj8737qn3bw25s2dnpd2nyzoum6kw8exxsjbjwitwglh5ds1uj3nbjnig0044ux9g78up44i7d7e8mj1t9g4rnfg1cs7n0mofwdo9nab3tu37f20kgfpuqylg0sgz5f9510ohn889eysl9ef4sdqtyop1d6rl7vobvjflvr089urvgsczn4gzhh4x0muxzynpw860rshwdf2ar4t2pzgkwl9myti99kztkuucpjh8rsd7slms645egoyol081h1jkfzffy3as51i6qmt9gd2ignkuo9w8nz44fbali3f8aczr05yfs8hin6brezqvi0f0qunmqcr76o8q3hmp2q2ykseenqz1pxrrw0phkbftzefag6vxxvnrdzrm81q20l0rq9e0iq418imfb9ew3hv8jxfnoihan2o5qr2978t0cikwmecuzqycq6yyrjsk9ywk1qbr5qujvkbyykozv61x39vhny07d0htj035s02ubvz19fh15h6oi5hed6e6ev2b7fm3g7lytze7qe2xo09fzf27yrwtt23xwljec3gfangcqezc5cmo50sfrcusmgtyt1ekqjefzxcnbbb1fvcrdjkuqc2eovg2wl2rgoi9okmgflwp8jdjjuerrls2hfbvg48jyni3kbfuow7xeu492hgb9f9cvi1axazquz6lcpr7m6qts5nrnppk5qcflnkhro6rdv45k5y3znyvyaks935dc3ypf1ny5z70xfb5qex2yce5oky566c85oy3zhmeky0tnyp0k575vhvn6rqt33h15h3779ark1rafvv6rvfr6gqia0gwxrul4o2ep5b2v2q1onivbopnfml1x3zyhsyhrjplpu72rn4ilqi6vzsejdzx3am2ksz2bnl5j2ezmlcmqqnzeb4p7jxyiy7ur739z1d6uh72i1dqk5d0xduc5ep8oqqw9lg5de44xt1ryezo0qkjyt7nucnhjjvyve7bbxqzw12q7kv5czjqrma1tq5qjawnmey4x1x3cialteft06d4q2whv7zrx0jov33ghao5n87gai4lhpuakwol6orr1431mxwjbjd1qdpjk4uvcna2baviwa5pjtcslm0o6ybey2qv1ay5yvvcq6ygfsviw1sgtlw2ei1uzewslwtqyy0qttqhgsepdzldduejw1n7esgt0uc3abeuypipuesa0h6ix9m5c2a4q2ljhtvqsgxleoruxcs4789sszdk5tm1nukl2e118fgdnfhrsh1qa23vedfkmizu8tlcnvn3l5u6a6fpxsc7ibw1j2q7588tvklu19dd18qzc3gfh3oof9e20i2vuglfzhhl0mfmghhd8x5rb9gevfctm20zzlj2vc8sny457ee95cawb0p0xcl2zmxlwjahz8wdt6p4hpq43mma1b23k3x9lpkxiy61ytwzsratoiq89xa70549s1ilqdzofvvxdl233hpvfhk7jdzabw5kfmp9bj2k8n10umc1qb8gfveicrkdvrbaclymhsfrfi46yfvof883yl8ohwfkewczmebhb7x9n1s6slsfbodoum3l10j21
[... remainder of the 4096-byte random payload elided; it repeats the data= dump above byte for byte ...] == [... glob-escaped copy of the same payload elided; only its tail is kept below ...]
\9\x\d\0\j\a\b\h\x\8\f\1\t\m\c\f\8\6\i\5\u\1\f\h\p\u\l\n\z\1\7\o\2\3\f\n\i\p\3\y\k\u\h\8\j\a\4\a\j\u\d\5\4\t\d\r\2\p\b\5\b\k\e\6\y\r\e\6\k\l\7\h\m\y\j\t\c\6\c\a\0\t\g\2\v\j\k\r\o\h\p\j\b\k\3\4\2\5\e\e\a\1\q\0\9\g\6\k\o\m\s\l\h\v\x\w\x\4\n\o\0\k\a\j\k\9\p\o\w\4\k\f\5\x\v\y\p\6\o\6\j\i\h\q\n\m\e\q\3\3\0\b\g\j\i\3\f\1\4\e\q\7\q\z\1\f\7\8\l\0\c\a\1\w\r\k\y\n\7\r\j\4\q\q\s\q\7\w\p\m\d\7\t\l\0\e\o\q\3\z\4\s\s\w\x\8\i\v\e\s\e\3\s\c\y\s\1\e\x\7\4\a\y\v\n\1\p\5\a\l\j\z\b\w\c\i\8\h\j\a\d\f\k\8\l\j\s\s\0\q\v\8\q\m\a\f\9\p\x\k\4\b\a\0\3\g\t\z\f\1\x\8\a\k\z\7\u\d\2\3\a\3\p\9\g\b\i\h\4\x\d\c\v\n\8\t\j\2\9\u\x\d\x\2\p\q\b\l\c\j\s\i\r\t\0\a\i\f\f\2\v\q\q\u\7\t\h\c\s\4\t\a\z\v\a\v\c\a\e\t\c\5\2\z\1\p\y\b\w\0\g\r\q\6\c\i\3\i\h\t\a\z\g\b\o\s\4\w\x\m\i\s\8\6\c\h\d\o\i\9\z\w\4\s\h\k\2\r\s\4\3\5\7\t\m\k\1\t\8\i\w\9\v\c\4\r\i\x\6\7\1\v\l\7\j\6\m\u\h\6\p\n\g\t\v\s\4\o\j\g\h\e\y\z\g\l\v\w\i\c\8\i\t\b\s\k\7\2\o\g\1\s\g\w\2\m\s\7\n\8\7\d\v\u\r\n\a\g\h\z\n\s\h\t\r\y\y\j\8\y\q\c\w\m\n\o\g\o\f\u\p\q\h\w\f\6\t\6\c\c\b\e\h\b\y\6\9\h\e\w\h\r\m\g\9\l\d\a\n\8\6\b\o\l\f\f\i\x\t\e\r\m\i\i\p\3\0\i\x\p\e ]] 00:06:42.942 00:06:42.942 real 0m0.953s 00:06:42.942 user 0m0.677s 00:06:42.942 sys 0m0.374s 00:06:42.942 01:47:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.942 01:47:53 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:42.942 END TEST dd_rw_offset 00:06:42.942 ************************************ 00:06:42.942 01:47:53 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:42.942 01:47:53 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:42.942 01:47:53 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:42.942 01:47:53 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:42.942 01:47:53 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:42.942 01:47:53 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:42.942 01:47:53 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:42.942 01:47:53 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:42.942 01:47:53 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:42.942 01:47:53 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:42.942 01:47:53 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:42.942 [2024-11-19 01:47:53.539570] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:42.942 [2024-11-19 01:47:53.539664] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72126 ] 00:06:43.239 { 00:06:43.239 "subsystems": [ 00:06:43.239 { 00:06:43.239 "subsystem": "bdev", 00:06:43.239 "config": [ 00:06:43.239 { 00:06:43.239 "params": { 00:06:43.239 "trtype": "pcie", 00:06:43.239 "traddr": "0000:00:10.0", 00:06:43.239 "name": "Nvme0" 00:06:43.239 }, 00:06:43.239 "method": "bdev_nvme_attach_controller" 00:06:43.239 }, 00:06:43.239 { 00:06:43.239 "method": "bdev_wait_for_examine" 00:06:43.239 } 00:06:43.239 ] 00:06:43.239 } 00:06:43.239 ] 00:06:43.239 } 00:06:43.239 [2024-11-19 01:47:53.676925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.239 [2024-11-19 01:47:53.696225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.239 [2024-11-19 01:47:53.723152] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.239  [2024-11-19T01:47:54.125Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:43.510 00:06:43.510 01:47:53 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:43.510 00:06:43.510 real 0m13.468s 00:06:43.510 user 0m9.740s 00:06:43.510 sys 0m4.327s 00:06:43.510 ************************************ 00:06:43.510 END TEST spdk_dd_basic_rw 00:06:43.510 ************************************ 00:06:43.510 01:47:53 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.510 01:47:53 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:43.510 01:47:53 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:43.510 01:47:53 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.510 01:47:53 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.510 01:47:53 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:43.510 ************************************ 00:06:43.510 START TEST spdk_dd_posix 00:06:43.510 ************************************ 00:06:43.510 01:47:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:43.510 * Looking for test storage... 
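Note: with spdk_dd_basic_rw finished, the dd_rw_offset case it ended on is easy to restate: write a 4 KiB random payload one block into the Nvme0n1 bdev, read the same block back, and compare. The sketch below is a simplified reconstruction from the commands in the trace, not the literal test/dd/basic_rw.sh source; conf.json is assumed to hold the bdev JSON reassembled earlier, standing in for the gen_conf output piped to /dev/fd/62.

# Reconstructed shape of the dd_rw_offset round-trip (simplified sketch).
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0   # 4096 random bytes
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

# Write the payload one block into the bdev (--seek=1 skips block 0 of the output).
"$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --seek=1 --json conf.json

# Read that block back out (--skip=1 skips block 0 of the input).
"$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --skip=1 --count=1 --json conf.json

# Compare the first 4096 bytes of the copy with the payload; the harness
# glob-escapes the right-hand side of ==, quoting it achieves the same.
read -rn4096 data < "$DUMP0"
read -rn4096 data_check < "$DUMP1"
[[ "$data" == "$data_check" ]] || exit 1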
00:06:43.510 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:43.510 01:47:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:43.510 01:47:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version 00:06:43.510 01:47:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:43.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.793 --rc genhtml_branch_coverage=1 00:06:43.793 --rc genhtml_function_coverage=1 00:06:43.793 --rc genhtml_legend=1 00:06:43.793 --rc geninfo_all_blocks=1 00:06:43.793 --rc geninfo_unexecuted_blocks=1 00:06:43.793 00:06:43.793 ' 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:43.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.793 --rc genhtml_branch_coverage=1 00:06:43.793 --rc genhtml_function_coverage=1 00:06:43.793 --rc genhtml_legend=1 00:06:43.793 --rc geninfo_all_blocks=1 00:06:43.793 --rc geninfo_unexecuted_blocks=1 00:06:43.793 00:06:43.793 ' 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:43.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.793 --rc genhtml_branch_coverage=1 00:06:43.793 --rc genhtml_function_coverage=1 00:06:43.793 --rc genhtml_legend=1 00:06:43.793 --rc geninfo_all_blocks=1 00:06:43.793 --rc geninfo_unexecuted_blocks=1 00:06:43.793 00:06:43.793 ' 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:43.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.793 --rc genhtml_branch_coverage=1 00:06:43.793 --rc genhtml_function_coverage=1 00:06:43.793 --rc genhtml_legend=1 00:06:43.793 --rc geninfo_all_blocks=1 00:06:43.793 --rc geninfo_unexecuted_blocks=1 00:06:43.793 00:06:43.793 ' 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.793 01:47:54 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.794 01:47:54 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.794 01:47:54 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.794 01:47:54 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.794 01:47:54 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:43.794 01:47:54 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.794 01:47:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:43.794 01:47:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:43.794 01:47:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:43.794 01:47:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:43.794 01:47:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:43.794 01:47:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:43.794 01:47:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:43.794 01:47:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:43.794 * First test run, liburing in use 00:06:43.794 01:47:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:43.794 01:47:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.794 01:47:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:06:43.794 01:47:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:43.794 ************************************ 00:06:43.794 START TEST dd_flag_append 00:06:43.794 ************************************ 00:06:43.794 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:06:43.794 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:43.794 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:43.794 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:43.794 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:43.794 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:43.794 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=t35hebds2ip55orq97o317pvv0gj8zqy 00:06:43.794 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:43.794 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:43.794 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:43.794 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=otu5z2w47otg0e3jrsgqzqffct1lveyf 00:06:43.794 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s t35hebds2ip55orq97o317pvv0gj8zqy 00:06:43.794 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s otu5z2w47otg0e3jrsgqzqffct1lveyf 00:06:43.794 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:43.794 [2024-11-19 01:47:54.220283] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
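Note: the dd_flag_append case whose startup appears just above reduces to a three-step check. The two 32-byte strings are the actual values generated in this run (dump0 and dump1 in the trace); SPDK_DD stands for the build/bin/spdk_dd binary, and the sketch is a condensed reading of dd/posix.sh rather than its literal source.

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
cd /home/vagrant/spdk_repo/spdk/test/dd

printf %s t35hebds2ip55orq97o317pvv0gj8zqy  > dd.dump0
printf %s otu5z2w47otg0e3jrsgqzqffct1lveyf > dd.dump1

# --oflag=append must add dump0's bytes after dump1's existing content.
"$SPDK_DD" --if=dd.dump0 --of=dd.dump1 --oflag=append

# Expected: dump1 now equals the original dump1 followed by dump0.
[[ $(<dd.dump1) == otu5z2w47otg0e3jrsgqzqffct1lveyft35hebds2ip55orq97o317pvv0gj8zqy ]]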
00:06:43.794 [2024-11-19 01:47:54.220391] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72198 ] 00:06:43.794 [2024-11-19 01:47:54.365365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.794 [2024-11-19 01:47:54.383002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.053 [2024-11-19 01:47:54.410405] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.053  [2024-11-19T01:47:54.668Z] Copying: 32/32 [B] (average 31 kBps) 00:06:44.053 00:06:44.053 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ otu5z2w47otg0e3jrsgqzqffct1lveyft35hebds2ip55orq97o317pvv0gj8zqy == \o\t\u\5\z\2\w\4\7\o\t\g\0\e\3\j\r\s\g\q\z\q\f\f\c\t\1\l\v\e\y\f\t\3\5\h\e\b\d\s\2\i\p\5\5\o\r\q\9\7\o\3\1\7\p\v\v\0\g\j\8\z\q\y ]] 00:06:44.053 00:06:44.053 real 0m0.366s 00:06:44.053 user 0m0.168s 00:06:44.053 sys 0m0.158s 00:06:44.053 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.053 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:44.053 ************************************ 00:06:44.053 END TEST dd_flag_append 00:06:44.053 ************************************ 00:06:44.053 01:47:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:44.053 01:47:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.053 01:47:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.053 01:47:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:44.053 ************************************ 00:06:44.053 START TEST dd_flag_directory 00:06:44.053 ************************************ 00:06:44.053 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:06:44.053 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:44.053 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:06:44.053 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:44.053 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.053 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.053 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.053 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.053 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.053 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.053 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.053 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:44.053 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:44.053 [2024-11-19 01:47:54.636547] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:44.053 [2024-11-19 01:47:54.636649] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72221 ] 00:06:44.311 [2024-11-19 01:47:54.781241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.311 [2024-11-19 01:47:54.800590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.311 [2024-11-19 01:47:54.829923] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.311 [2024-11-19 01:47:54.844374] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:44.311 [2024-11-19 01:47:54.844438] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:44.311 [2024-11-19 01:47:54.844470] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:44.311 [2024-11-19 01:47:54.901203] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:44.570 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:06:44.570 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:44.570 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:06:44.570 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:06:44.570 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:06:44.570 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:44.570 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:44.570 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:06:44.570 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:44.570 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.570 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.570 01:47:54 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.570 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.570 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.570 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.570 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.570 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:44.570 01:47:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:44.570 [2024-11-19 01:47:55.011118] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:44.570 [2024-11-19 01:47:55.011254] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72230 ] 00:06:44.570 [2024-11-19 01:47:55.162542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.570 [2024-11-19 01:47:55.180294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.829 [2024-11-19 01:47:55.207642] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.829 [2024-11-19 01:47:55.221843] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:44.829 [2024-11-19 01:47:55.221909] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:44.829 [2024-11-19 01:47:55.221942] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:44.829 [2024-11-19 01:47:55.276197] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:44.829 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:06:44.829 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:44.829 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:06:44.829 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:06:44.829 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:06:44.829 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:44.829 00:06:44.829 real 0m0.740s 00:06:44.829 user 0m0.333s 00:06:44.829 sys 0m0.199s 00:06:44.829 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.829 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:06:44.829 ************************************ 00:06:44.829 END TEST dd_flag_directory 00:06:44.829 ************************************ 00:06:44.829 01:47:55 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:44.829 01:47:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.829 01:47:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.829 01:47:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:44.829 ************************************ 00:06:44.829 START TEST dd_flag_nofollow 00:06:44.829 ************************************ 00:06:44.829 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:06:44.829 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:44.829 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:44.829 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:44.829 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:44.829 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:44.829 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:06:44.829 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:44.829 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.829 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.829 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.829 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.829 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.829 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.829 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.829 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:44.829 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:44.829 [2024-11-19 01:47:55.429815] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
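Note: dd_flag_nofollow, starting here, drives three invocations that are all visible in the trace: two that must fail when a symlink is opened with nofollow, and one that must succeed without the flag. A condensed sketch, with SPDK_DD again standing for the spdk_dd binary:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
cd /home/vagrant/spdk_repo/spdk/test/dd

ln -fs dd.dump0 dd.dump0.link
ln -fs dd.dump1 dd.dump1.link

# With nofollow, opening a symlink must fail on input and on output alike
# ("Too many levels of symbolic links", i.e. ELOOP).
! "$SPDK_DD" --if=dd.dump0.link --iflag=nofollow --of=dd.dump1
! "$SPDK_DD" --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow

# Without the flag, the link is dereferenced and the copy succeeds.
"$SPDK_DD" --if=dd.dump0.link --of=dd.dump1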
00:06:44.829 [2024-11-19 01:47:55.429932] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72259 ] 00:06:45.087 [2024-11-19 01:47:55.574353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.087 [2024-11-19 01:47:55.593040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.087 [2024-11-19 01:47:55.619388] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.087 [2024-11-19 01:47:55.633362] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:45.087 [2024-11-19 01:47:55.633470] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:45.087 [2024-11-19 01:47:55.633506] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:45.087 [2024-11-19 01:47:55.687753] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:45.346 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:06:45.346 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:45.346 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:06:45.346 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:06:45.346 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:06:45.346 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:45.346 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:45.346 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:06:45.346 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:45.346 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.346 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.346 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.346 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.346 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.346 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.346 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.346 01:47:55 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:45.346 01:47:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:45.346 [2024-11-19 01:47:55.787176] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:45.346 [2024-11-19 01:47:55.787278] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72267 ] 00:06:45.346 [2024-11-19 01:47:55.934292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.346 [2024-11-19 01:47:55.952287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.604 [2024-11-19 01:47:55.979399] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.604 [2024-11-19 01:47:55.994040] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:45.604 [2024-11-19 01:47:55.994120] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:45.604 [2024-11-19 01:47:55.994152] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:45.604 [2024-11-19 01:47:56.054191] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:45.604 01:47:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:06:45.604 01:47:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:45.604 01:47:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:06:45.604 01:47:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:06:45.604 01:47:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:06:45.604 01:47:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:45.604 01:47:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:45.604 01:47:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:45.604 01:47:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:45.604 01:47:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:45.604 [2024-11-19 01:47:56.159770] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:45.604 [2024-11-19 01:47:56.159876] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72276 ] 00:06:45.863 [2024-11-19 01:47:56.304114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.863 [2024-11-19 01:47:56.321885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.863 [2024-11-19 01:47:56.347486] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.863  [2024-11-19T01:47:56.478Z] Copying: 512/512 [B] (average 500 kBps) 00:06:45.863 00:06:45.863 01:47:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ oqygxv3935laodlraxcr5cydhe10jv0gqj2b5fel5lx50ks35zf1657rz673g5awpozcx2em8sdshjn65tgqvu08et07ho8o6zhjujhx0d0l5iecg2ocd8lo506tk66pu8a7fnbtzlwyare6w754xhk5u7xqxocru5bdp918z41hv7tfgifpkvgn74jyqot7ytwysr6eb4a96jsqlag6ocsbl73dw5gtozi511sivh864w6tht96c2gg6hl5nyq39iwdhe3kxmowddogb3e7kf3f173cv4cvv2adlasd8t9o3idc894o8hjm5c3bmpx6ckizfka4lcssy3emgnqxhrtwsmpaj3j1qym1kcx573van37428iqbkoiyq6qqx9i5v98kzq16pu7fo3e3w0ynvu4tcu8s6fi8qfw43dbio0ps0eaj17x6jgulk8no1vs5dpk9o3tjsygt4eaur92r8tp0dm9ej6o0aqcbp95lmnhssm505b2gwau7ofejzeb == \o\q\y\g\x\v\3\9\3\5\l\a\o\d\l\r\a\x\c\r\5\c\y\d\h\e\1\0\j\v\0\g\q\j\2\b\5\f\e\l\5\l\x\5\0\k\s\3\5\z\f\1\6\5\7\r\z\6\7\3\g\5\a\w\p\o\z\c\x\2\e\m\8\s\d\s\h\j\n\6\5\t\g\q\v\u\0\8\e\t\0\7\h\o\8\o\6\z\h\j\u\j\h\x\0\d\0\l\5\i\e\c\g\2\o\c\d\8\l\o\5\0\6\t\k\6\6\p\u\8\a\7\f\n\b\t\z\l\w\y\a\r\e\6\w\7\5\4\x\h\k\5\u\7\x\q\x\o\c\r\u\5\b\d\p\9\1\8\z\4\1\h\v\7\t\f\g\i\f\p\k\v\g\n\7\4\j\y\q\o\t\7\y\t\w\y\s\r\6\e\b\4\a\9\6\j\s\q\l\a\g\6\o\c\s\b\l\7\3\d\w\5\g\t\o\z\i\5\1\1\s\i\v\h\8\6\4\w\6\t\h\t\9\6\c\2\g\g\6\h\l\5\n\y\q\3\9\i\w\d\h\e\3\k\x\m\o\w\d\d\o\g\b\3\e\7\k\f\3\f\1\7\3\c\v\4\c\v\v\2\a\d\l\a\s\d\8\t\9\o\3\i\d\c\8\9\4\o\8\h\j\m\5\c\3\b\m\p\x\6\c\k\i\z\f\k\a\4\l\c\s\s\y\3\e\m\g\n\q\x\h\r\t\w\s\m\p\a\j\3\j\1\q\y\m\1\k\c\x\5\7\3\v\a\n\3\7\4\2\8\i\q\b\k\o\i\y\q\6\q\q\x\9\i\5\v\9\8\k\z\q\1\6\p\u\7\f\o\3\e\3\w\0\y\n\v\u\4\t\c\u\8\s\6\f\i\8\q\f\w\4\3\d\b\i\o\0\p\s\0\e\a\j\1\7\x\6\j\g\u\l\k\8\n\o\1\v\s\5\d\p\k\9\o\3\t\j\s\y\g\t\4\e\a\u\r\9\2\r\8\t\p\0\d\m\9\e\j\6\o\0\a\q\c\b\p\9\5\l\m\n\h\s\s\m\5\0\5\b\2\g\w\a\u\7\o\f\e\j\z\e\b ]] 00:06:45.863 00:06:45.863 real 0m1.096s 00:06:45.863 user 0m0.532s 00:06:45.863 sys 0m0.323s 00:06:45.863 01:47:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.863 ************************************ 00:06:45.863 END TEST dd_flag_nofollow 00:06:45.863 ************************************ 00:06:45.863 01:47:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:46.122 01:47:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:46.122 01:47:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.123 01:47:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.123 01:47:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:46.123 ************************************ 00:06:46.123 START TEST dd_flag_noatime 00:06:46.123 ************************************ 00:06:46.123 01:47:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:06:46.123 01:47:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:06:46.123 01:47:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:46.123 01:47:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:46.123 01:47:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:46.123 01:47:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:46.123 01:47:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:46.123 01:47:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1731980876 00:06:46.123 01:47:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:46.123 01:47:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1731980876 00:06:46.123 01:47:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:47.058 01:47:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:47.058 [2024-11-19 01:47:57.591553] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:47.058 [2024-11-19 01:47:57.591661] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72313 ] 00:06:47.316 [2024-11-19 01:47:57.743969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.316 [2024-11-19 01:47:57.767142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.316 [2024-11-19 01:47:57.800080] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.316  [2024-11-19T01:47:57.931Z] Copying: 512/512 [B] (average 500 kBps) 00:06:47.316 00:06:47.316 01:47:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:47.316 01:47:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1731980876 )) 00:06:47.316 01:47:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:47.575 01:47:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1731980876 )) 00:06:47.575 01:47:57 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:47.575 [2024-11-19 01:47:57.986814] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
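Note: the noatime logic running above comes down to this: record the source file's access time with stat %X (1731980876 in this run), copy with --iflag=noatime, and require the atime to be unchanged; a plain copy afterwards may then advance it. Simplified from the trace; the real test tracks dd.dump1's atime as well.

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
cd /home/vagrant/spdk_repo/spdk/test/dd

atime_if=$(stat --printf=%X dd.dump0)
sleep 1                                  # make any atime change observable

"$SPDK_DD" --if=dd.dump0 --iflag=noatime --of=dd.dump1
(( $(stat --printf=%X dd.dump0) == atime_if ))   # atime must not move

"$SPDK_DD" --if=dd.dump0 --of=dd.dump1
(( $(stat --printf=%X dd.dump0) > atime_if ))    # a normal read updates it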
00:06:47.575 [2024-11-19 01:47:57.986913] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72326 ] 00:06:47.575 [2024-11-19 01:47:58.132020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.575 [2024-11-19 01:47:58.152325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.575 [2024-11-19 01:47:58.178201] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.833  [2024-11-19T01:47:58.448Z] Copying: 512/512 [B] (average 500 kBps) 00:06:47.833 00:06:47.833 01:47:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:47.833 01:47:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1731980878 )) 00:06:47.833 00:06:47.833 real 0m1.785s 00:06:47.833 user 0m0.376s 00:06:47.833 sys 0m0.344s 00:06:47.833 01:47:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.833 ************************************ 00:06:47.833 END TEST dd_flag_noatime 00:06:47.833 ************************************ 00:06:47.833 01:47:58 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:47.833 01:47:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:47.833 01:47:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.833 01:47:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.833 01:47:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:47.833 ************************************ 00:06:47.833 START TEST dd_flags_misc 00:06:47.833 ************************************ 00:06:47.833 01:47:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:06:47.833 01:47:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:47.833 01:47:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:47.833 01:47:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:47.833 01:47:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:47.833 01:47:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:47.833 01:47:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:47.833 01:47:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:47.833 01:47:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:47.833 01:47:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:47.833 [2024-11-19 01:47:58.400322] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:47.833 [2024-11-19 01:47:58.400422] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72355 ] 00:06:48.092 [2024-11-19 01:47:58.530789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.092 [2024-11-19 01:47:58.548655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.092 [2024-11-19 01:47:58.576828] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.092  [2024-11-19T01:47:58.707Z] Copying: 512/512 [B] (average 500 kBps) 00:06:48.092 00:06:48.092 01:47:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ fp2ndzwgnea44pnvrvqfp0y08lioji4ia97bipffr4vs2641m5c17umslul3ze5ct4h423trlfhc0gtdypujvho47258wda66rgu030zc55qrwofv6uhigw164ldwg8bm9gnehjr648rtzzlbvzlrs0y67eytdvgtalqfi4o3e4yradxpy1q04eqdkpj0hvnrjkzd26nym1eegfvgcfd871nmjerq5gpkayv1y6peuw0o3hlo0tw2xr7o669zdfqanp3e1w31syvvkshcld0o5i243abjfjits8e67dr6fzgf2qu7vpdk27br2zo3qftwni7zbpgpuqz1cvqjdkhfg44t6qmewqso8aywd0miuahkxozg2z6zzvdcwpxn3yedtalzf3qmvl17n7inwlsbqjlbgu0ahhppaq77lbght7mwxag36mbcn15nxtxk3z49f8kj8bguibhz3m1r8nq0fj6lo701qfzz2qs4m1bpdxsx39x38xrs7qziwb7j6wa == \f\p\2\n\d\z\w\g\n\e\a\4\4\p\n\v\r\v\q\f\p\0\y\0\8\l\i\o\j\i\4\i\a\9\7\b\i\p\f\f\r\4\v\s\2\6\4\1\m\5\c\1\7\u\m\s\l\u\l\3\z\e\5\c\t\4\h\4\2\3\t\r\l\f\h\c\0\g\t\d\y\p\u\j\v\h\o\4\7\2\5\8\w\d\a\6\6\r\g\u\0\3\0\z\c\5\5\q\r\w\o\f\v\6\u\h\i\g\w\1\6\4\l\d\w\g\8\b\m\9\g\n\e\h\j\r\6\4\8\r\t\z\z\l\b\v\z\l\r\s\0\y\6\7\e\y\t\d\v\g\t\a\l\q\f\i\4\o\3\e\4\y\r\a\d\x\p\y\1\q\0\4\e\q\d\k\p\j\0\h\v\n\r\j\k\z\d\2\6\n\y\m\1\e\e\g\f\v\g\c\f\d\8\7\1\n\m\j\e\r\q\5\g\p\k\a\y\v\1\y\6\p\e\u\w\0\o\3\h\l\o\0\t\w\2\x\r\7\o\6\6\9\z\d\f\q\a\n\p\3\e\1\w\3\1\s\y\v\v\k\s\h\c\l\d\0\o\5\i\2\4\3\a\b\j\f\j\i\t\s\8\e\6\7\d\r\6\f\z\g\f\2\q\u\7\v\p\d\k\2\7\b\r\2\z\o\3\q\f\t\w\n\i\7\z\b\p\g\p\u\q\z\1\c\v\q\j\d\k\h\f\g\4\4\t\6\q\m\e\w\q\s\o\8\a\y\w\d\0\m\i\u\a\h\k\x\o\z\g\2\z\6\z\z\v\d\c\w\p\x\n\3\y\e\d\t\a\l\z\f\3\q\m\v\l\1\7\n\7\i\n\w\l\s\b\q\j\l\b\g\u\0\a\h\h\p\p\a\q\7\7\l\b\g\h\t\7\m\w\x\a\g\3\6\m\b\c\n\1\5\n\x\t\x\k\3\z\4\9\f\8\k\j\8\b\g\u\i\b\h\z\3\m\1\r\8\n\q\0\f\j\6\l\o\7\0\1\q\f\z\z\2\q\s\4\m\1\b\p\d\x\s\x\3\9\x\3\8\x\r\s\7\q\z\i\w\b\7\j\6\w\a ]] 00:06:48.092 01:47:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:48.092 01:47:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:48.351 [2024-11-19 01:47:58.728693] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:48.351 [2024-11-19 01:47:58.728794] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72359 ] 00:06:48.351 [2024-11-19 01:47:58.865806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.351 [2024-11-19 01:47:58.887329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.351 [2024-11-19 01:47:58.916931] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.351  [2024-11-19T01:47:59.225Z] Copying: 512/512 [B] (average 500 kBps) 00:06:48.610 00:06:48.610 01:47:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ fp2ndzwgnea44pnvrvqfp0y08lioji4ia97bipffr4vs2641m5c17umslul3ze5ct4h423trlfhc0gtdypujvho47258wda66rgu030zc55qrwofv6uhigw164ldwg8bm9gnehjr648rtzzlbvzlrs0y67eytdvgtalqfi4o3e4yradxpy1q04eqdkpj0hvnrjkzd26nym1eegfvgcfd871nmjerq5gpkayv1y6peuw0o3hlo0tw2xr7o669zdfqanp3e1w31syvvkshcld0o5i243abjfjits8e67dr6fzgf2qu7vpdk27br2zo3qftwni7zbpgpuqz1cvqjdkhfg44t6qmewqso8aywd0miuahkxozg2z6zzvdcwpxn3yedtalzf3qmvl17n7inwlsbqjlbgu0ahhppaq77lbght7mwxag36mbcn15nxtxk3z49f8kj8bguibhz3m1r8nq0fj6lo701qfzz2qs4m1bpdxsx39x38xrs7qziwb7j6wa == \f\p\2\n\d\z\w\g\n\e\a\4\4\p\n\v\r\v\q\f\p\0\y\0\8\l\i\o\j\i\4\i\a\9\7\b\i\p\f\f\r\4\v\s\2\6\4\1\m\5\c\1\7\u\m\s\l\u\l\3\z\e\5\c\t\4\h\4\2\3\t\r\l\f\h\c\0\g\t\d\y\p\u\j\v\h\o\4\7\2\5\8\w\d\a\6\6\r\g\u\0\3\0\z\c\5\5\q\r\w\o\f\v\6\u\h\i\g\w\1\6\4\l\d\w\g\8\b\m\9\g\n\e\h\j\r\6\4\8\r\t\z\z\l\b\v\z\l\r\s\0\y\6\7\e\y\t\d\v\g\t\a\l\q\f\i\4\o\3\e\4\y\r\a\d\x\p\y\1\q\0\4\e\q\d\k\p\j\0\h\v\n\r\j\k\z\d\2\6\n\y\m\1\e\e\g\f\v\g\c\f\d\8\7\1\n\m\j\e\r\q\5\g\p\k\a\y\v\1\y\6\p\e\u\w\0\o\3\h\l\o\0\t\w\2\x\r\7\o\6\6\9\z\d\f\q\a\n\p\3\e\1\w\3\1\s\y\v\v\k\s\h\c\l\d\0\o\5\i\2\4\3\a\b\j\f\j\i\t\s\8\e\6\7\d\r\6\f\z\g\f\2\q\u\7\v\p\d\k\2\7\b\r\2\z\o\3\q\f\t\w\n\i\7\z\b\p\g\p\u\q\z\1\c\v\q\j\d\k\h\f\g\4\4\t\6\q\m\e\w\q\s\o\8\a\y\w\d\0\m\i\u\a\h\k\x\o\z\g\2\z\6\z\z\v\d\c\w\p\x\n\3\y\e\d\t\a\l\z\f\3\q\m\v\l\1\7\n\7\i\n\w\l\s\b\q\j\l\b\g\u\0\a\h\h\p\p\a\q\7\7\l\b\g\h\t\7\m\w\x\a\g\3\6\m\b\c\n\1\5\n\x\t\x\k\3\z\4\9\f\8\k\j\8\b\g\u\i\b\h\z\3\m\1\r\8\n\q\0\f\j\6\l\o\7\0\1\q\f\z\z\2\q\s\4\m\1\b\p\d\x\s\x\3\9\x\3\8\x\r\s\7\q\z\i\w\b\7\j\6\w\a ]] 00:06:48.610 01:47:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:48.610 01:47:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:48.610 [2024-11-19 01:47:59.082291] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:48.610 [2024-11-19 01:47:59.082387] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72368 ] 00:06:48.610 [2024-11-19 01:47:59.226445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.869 [2024-11-19 01:47:59.244478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.869 [2024-11-19 01:47:59.270745] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.869  [2024-11-19T01:47:59.484Z] Copying: 512/512 [B] (average 125 kBps) 00:06:48.869 00:06:48.870 01:47:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ fp2ndzwgnea44pnvrvqfp0y08lioji4ia97bipffr4vs2641m5c17umslul3ze5ct4h423trlfhc0gtdypujvho47258wda66rgu030zc55qrwofv6uhigw164ldwg8bm9gnehjr648rtzzlbvzlrs0y67eytdvgtalqfi4o3e4yradxpy1q04eqdkpj0hvnrjkzd26nym1eegfvgcfd871nmjerq5gpkayv1y6peuw0o3hlo0tw2xr7o669zdfqanp3e1w31syvvkshcld0o5i243abjfjits8e67dr6fzgf2qu7vpdk27br2zo3qftwni7zbpgpuqz1cvqjdkhfg44t6qmewqso8aywd0miuahkxozg2z6zzvdcwpxn3yedtalzf3qmvl17n7inwlsbqjlbgu0ahhppaq77lbght7mwxag36mbcn15nxtxk3z49f8kj8bguibhz3m1r8nq0fj6lo701qfzz2qs4m1bpdxsx39x38xrs7qziwb7j6wa == \f\p\2\n\d\z\w\g\n\e\a\4\4\p\n\v\r\v\q\f\p\0\y\0\8\l\i\o\j\i\4\i\a\9\7\b\i\p\f\f\r\4\v\s\2\6\4\1\m\5\c\1\7\u\m\s\l\u\l\3\z\e\5\c\t\4\h\4\2\3\t\r\l\f\h\c\0\g\t\d\y\p\u\j\v\h\o\4\7\2\5\8\w\d\a\6\6\r\g\u\0\3\0\z\c\5\5\q\r\w\o\f\v\6\u\h\i\g\w\1\6\4\l\d\w\g\8\b\m\9\g\n\e\h\j\r\6\4\8\r\t\z\z\l\b\v\z\l\r\s\0\y\6\7\e\y\t\d\v\g\t\a\l\q\f\i\4\o\3\e\4\y\r\a\d\x\p\y\1\q\0\4\e\q\d\k\p\j\0\h\v\n\r\j\k\z\d\2\6\n\y\m\1\e\e\g\f\v\g\c\f\d\8\7\1\n\m\j\e\r\q\5\g\p\k\a\y\v\1\y\6\p\e\u\w\0\o\3\h\l\o\0\t\w\2\x\r\7\o\6\6\9\z\d\f\q\a\n\p\3\e\1\w\3\1\s\y\v\v\k\s\h\c\l\d\0\o\5\i\2\4\3\a\b\j\f\j\i\t\s\8\e\6\7\d\r\6\f\z\g\f\2\q\u\7\v\p\d\k\2\7\b\r\2\z\o\3\q\f\t\w\n\i\7\z\b\p\g\p\u\q\z\1\c\v\q\j\d\k\h\f\g\4\4\t\6\q\m\e\w\q\s\o\8\a\y\w\d\0\m\i\u\a\h\k\x\o\z\g\2\z\6\z\z\v\d\c\w\p\x\n\3\y\e\d\t\a\l\z\f\3\q\m\v\l\1\7\n\7\i\n\w\l\s\b\q\j\l\b\g\u\0\a\h\h\p\p\a\q\7\7\l\b\g\h\t\7\m\w\x\a\g\3\6\m\b\c\n\1\5\n\x\t\x\k\3\z\4\9\f\8\k\j\8\b\g\u\i\b\h\z\3\m\1\r\8\n\q\0\f\j\6\l\o\7\0\1\q\f\z\z\2\q\s\4\m\1\b\p\d\x\s\x\3\9\x\3\8\x\r\s\7\q\z\i\w\b\7\j\6\w\a ]] 00:06:48.870 01:47:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:48.870 01:47:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:48.870 [2024-11-19 01:47:59.447515] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:48.870 [2024-11-19 01:47:59.447607] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72378 ] 00:06:49.130 [2024-11-19 01:47:59.586745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.130 [2024-11-19 01:47:59.604036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.130 [2024-11-19 01:47:59.630030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.130  [2024-11-19T01:47:59.745Z] Copying: 512/512 [B] (average 250 kBps) 00:06:49.130 00:06:49.130 01:47:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ fp2ndzwgnea44pnvrvqfp0y08lioji4ia97bipffr4vs2641m5c17umslul3ze5ct4h423trlfhc0gtdypujvho47258wda66rgu030zc55qrwofv6uhigw164ldwg8bm9gnehjr648rtzzlbvzlrs0y67eytdvgtalqfi4o3e4yradxpy1q04eqdkpj0hvnrjkzd26nym1eegfvgcfd871nmjerq5gpkayv1y6peuw0o3hlo0tw2xr7o669zdfqanp3e1w31syvvkshcld0o5i243abjfjits8e67dr6fzgf2qu7vpdk27br2zo3qftwni7zbpgpuqz1cvqjdkhfg44t6qmewqso8aywd0miuahkxozg2z6zzvdcwpxn3yedtalzf3qmvl17n7inwlsbqjlbgu0ahhppaq77lbght7mwxag36mbcn15nxtxk3z49f8kj8bguibhz3m1r8nq0fj6lo701qfzz2qs4m1bpdxsx39x38xrs7qziwb7j6wa == \f\p\2\n\d\z\w\g\n\e\a\4\4\p\n\v\r\v\q\f\p\0\y\0\8\l\i\o\j\i\4\i\a\9\7\b\i\p\f\f\r\4\v\s\2\6\4\1\m\5\c\1\7\u\m\s\l\u\l\3\z\e\5\c\t\4\h\4\2\3\t\r\l\f\h\c\0\g\t\d\y\p\u\j\v\h\o\4\7\2\5\8\w\d\a\6\6\r\g\u\0\3\0\z\c\5\5\q\r\w\o\f\v\6\u\h\i\g\w\1\6\4\l\d\w\g\8\b\m\9\g\n\e\h\j\r\6\4\8\r\t\z\z\l\b\v\z\l\r\s\0\y\6\7\e\y\t\d\v\g\t\a\l\q\f\i\4\o\3\e\4\y\r\a\d\x\p\y\1\q\0\4\e\q\d\k\p\j\0\h\v\n\r\j\k\z\d\2\6\n\y\m\1\e\e\g\f\v\g\c\f\d\8\7\1\n\m\j\e\r\q\5\g\p\k\a\y\v\1\y\6\p\e\u\w\0\o\3\h\l\o\0\t\w\2\x\r\7\o\6\6\9\z\d\f\q\a\n\p\3\e\1\w\3\1\s\y\v\v\k\s\h\c\l\d\0\o\5\i\2\4\3\a\b\j\f\j\i\t\s\8\e\6\7\d\r\6\f\z\g\f\2\q\u\7\v\p\d\k\2\7\b\r\2\z\o\3\q\f\t\w\n\i\7\z\b\p\g\p\u\q\z\1\c\v\q\j\d\k\h\f\g\4\4\t\6\q\m\e\w\q\s\o\8\a\y\w\d\0\m\i\u\a\h\k\x\o\z\g\2\z\6\z\z\v\d\c\w\p\x\n\3\y\e\d\t\a\l\z\f\3\q\m\v\l\1\7\n\7\i\n\w\l\s\b\q\j\l\b\g\u\0\a\h\h\p\p\a\q\7\7\l\b\g\h\t\7\m\w\x\a\g\3\6\m\b\c\n\1\5\n\x\t\x\k\3\z\4\9\f\8\k\j\8\b\g\u\i\b\h\z\3\m\1\r\8\n\q\0\f\j\6\l\o\7\0\1\q\f\z\z\2\q\s\4\m\1\b\p\d\x\s\x\3\9\x\3\8\x\r\s\7\q\z\i\w\b\7\j\6\w\a ]] 00:06:49.130 01:47:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:49.130 01:47:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:49.130 01:47:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:49.130 01:47:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:49.389 01:47:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:49.390 01:47:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:49.390 [2024-11-19 01:47:59.790884] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
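At this point the harness has finished the --iflag=direct sweep (oflags direct, nonblock, sync, dsync: pids 72355, 72359, 72368, 72378) and is starting the identical sweep with --iflag=nonblock. dd_flags_misc is just a small flag matrix: copy dump0 to dump1 under every iflag/oflag combination and assert the bytes survive unchanged. A hedged coreutils sketch of the loop (the array names mirror the dd/posix.sh trace above; bs=512 and the cmp check are illustrative stand-ins for the harness's content comparison):

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    for flag_ro in "${flags_ro[@]}"; do
      for flag_rw in "${flags_rw[@]}"; do
        dd if=dd.dump0 of=dd.dump1 bs=512 iflag="$flag_ro" oflag="$flag_rw"
        cmp -s dd.dump0 dd.dump1 || echo "mismatch under $flag_ro/$flag_rw"
      done
    done

The long backslash-escaped strings in the trace are bash xtrace rendering the content check itself, [[ $(<dd.dump1) == "$expected" ]]: because the right-hand side of == is a pattern position, xtrace prints the quoted expected value with every character escaped to show it matches literally rather than as a glob.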
00:06:49.390 [2024-11-19 01:47:59.790984] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72382 ] 00:06:49.390 [2024-11-19 01:47:59.928373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.390 [2024-11-19 01:47:59.949674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.390 [2024-11-19 01:47:59.976624] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.390  [2024-11-19T01:48:00.263Z] Copying: 512/512 [B] (average 500 kBps) 00:06:49.648 00:06:49.648 01:48:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 2re72hf9c7zr1c1kw29h2smbohnog2kjnw8sir49d59pwolxml1rfrrquurlmkueqepso4ps5hh1crcq1tpz9z3iqqcy8c7qi5y5lqpc42e73lxv76si37djhuu0t5rwnux6ufj2xuug0hhmsslsruy6usqg149iootphwdbm7uwx6lfpl17xhujaqndj42aguwd6q8qp8wc96n5xb9nnrl8vdohdn8oy6t9rm9jd95snzw201j2zx2tu0um4o5axn811vjs4vlze4knj0420xi6vncv5xyzu0ufbzw43d20ei9gxkwv15mo3q1s0gol2fq8lqms8t7y3ctlavcvq1mwfekirfk0ip67f79f2j9yqkf4azczftp3cvuamqajdy9a877wxpbx2bc5dh350kfh6d4jdygqv39j9prepge9z9tqjt3f181pwvbe2djxwi93gozmzttjiroos1ji7qnl3934amaf80dodtth766rndnqhnmn1ebcnk2pgu6n == \2\r\e\7\2\h\f\9\c\7\z\r\1\c\1\k\w\2\9\h\2\s\m\b\o\h\n\o\g\2\k\j\n\w\8\s\i\r\4\9\d\5\9\p\w\o\l\x\m\l\1\r\f\r\r\q\u\u\r\l\m\k\u\e\q\e\p\s\o\4\p\s\5\h\h\1\c\r\c\q\1\t\p\z\9\z\3\i\q\q\c\y\8\c\7\q\i\5\y\5\l\q\p\c\4\2\e\7\3\l\x\v\7\6\s\i\3\7\d\j\h\u\u\0\t\5\r\w\n\u\x\6\u\f\j\2\x\u\u\g\0\h\h\m\s\s\l\s\r\u\y\6\u\s\q\g\1\4\9\i\o\o\t\p\h\w\d\b\m\7\u\w\x\6\l\f\p\l\1\7\x\h\u\j\a\q\n\d\j\4\2\a\g\u\w\d\6\q\8\q\p\8\w\c\9\6\n\5\x\b\9\n\n\r\l\8\v\d\o\h\d\n\8\o\y\6\t\9\r\m\9\j\d\9\5\s\n\z\w\2\0\1\j\2\z\x\2\t\u\0\u\m\4\o\5\a\x\n\8\1\1\v\j\s\4\v\l\z\e\4\k\n\j\0\4\2\0\x\i\6\v\n\c\v\5\x\y\z\u\0\u\f\b\z\w\4\3\d\2\0\e\i\9\g\x\k\w\v\1\5\m\o\3\q\1\s\0\g\o\l\2\f\q\8\l\q\m\s\8\t\7\y\3\c\t\l\a\v\c\v\q\1\m\w\f\e\k\i\r\f\k\0\i\p\6\7\f\7\9\f\2\j\9\y\q\k\f\4\a\z\c\z\f\t\p\3\c\v\u\a\m\q\a\j\d\y\9\a\8\7\7\w\x\p\b\x\2\b\c\5\d\h\3\5\0\k\f\h\6\d\4\j\d\y\g\q\v\3\9\j\9\p\r\e\p\g\e\9\z\9\t\q\j\t\3\f\1\8\1\p\w\v\b\e\2\d\j\x\w\i\9\3\g\o\z\m\z\t\t\j\i\r\o\o\s\1\j\i\7\q\n\l\3\9\3\4\a\m\a\f\8\0\d\o\d\t\t\h\7\6\6\r\n\d\n\q\h\n\m\n\1\e\b\c\n\k\2\p\g\u\6\n ]] 00:06:49.648 01:48:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:49.648 01:48:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:49.648 [2024-11-19 01:48:00.134464] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:49.648 [2024-11-19 01:48:00.134629] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72391 ] 00:06:49.648 [2024-11-19 01:48:00.265700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.907 [2024-11-19 01:48:00.284789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.907 [2024-11-19 01:48:00.310721] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.907  [2024-11-19T01:48:00.522Z] Copying: 512/512 [B] (average 500 kBps) 00:06:49.907 00:06:49.907 01:48:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 2re72hf9c7zr1c1kw29h2smbohnog2kjnw8sir49d59pwolxml1rfrrquurlmkueqepso4ps5hh1crcq1tpz9z3iqqcy8c7qi5y5lqpc42e73lxv76si37djhuu0t5rwnux6ufj2xuug0hhmsslsruy6usqg149iootphwdbm7uwx6lfpl17xhujaqndj42aguwd6q8qp8wc96n5xb9nnrl8vdohdn8oy6t9rm9jd95snzw201j2zx2tu0um4o5axn811vjs4vlze4knj0420xi6vncv5xyzu0ufbzw43d20ei9gxkwv15mo3q1s0gol2fq8lqms8t7y3ctlavcvq1mwfekirfk0ip67f79f2j9yqkf4azczftp3cvuamqajdy9a877wxpbx2bc5dh350kfh6d4jdygqv39j9prepge9z9tqjt3f181pwvbe2djxwi93gozmzttjiroos1ji7qnl3934amaf80dodtth766rndnqhnmn1ebcnk2pgu6n == \2\r\e\7\2\h\f\9\c\7\z\r\1\c\1\k\w\2\9\h\2\s\m\b\o\h\n\o\g\2\k\j\n\w\8\s\i\r\4\9\d\5\9\p\w\o\l\x\m\l\1\r\f\r\r\q\u\u\r\l\m\k\u\e\q\e\p\s\o\4\p\s\5\h\h\1\c\r\c\q\1\t\p\z\9\z\3\i\q\q\c\y\8\c\7\q\i\5\y\5\l\q\p\c\4\2\e\7\3\l\x\v\7\6\s\i\3\7\d\j\h\u\u\0\t\5\r\w\n\u\x\6\u\f\j\2\x\u\u\g\0\h\h\m\s\s\l\s\r\u\y\6\u\s\q\g\1\4\9\i\o\o\t\p\h\w\d\b\m\7\u\w\x\6\l\f\p\l\1\7\x\h\u\j\a\q\n\d\j\4\2\a\g\u\w\d\6\q\8\q\p\8\w\c\9\6\n\5\x\b\9\n\n\r\l\8\v\d\o\h\d\n\8\o\y\6\t\9\r\m\9\j\d\9\5\s\n\z\w\2\0\1\j\2\z\x\2\t\u\0\u\m\4\o\5\a\x\n\8\1\1\v\j\s\4\v\l\z\e\4\k\n\j\0\4\2\0\x\i\6\v\n\c\v\5\x\y\z\u\0\u\f\b\z\w\4\3\d\2\0\e\i\9\g\x\k\w\v\1\5\m\o\3\q\1\s\0\g\o\l\2\f\q\8\l\q\m\s\8\t\7\y\3\c\t\l\a\v\c\v\q\1\m\w\f\e\k\i\r\f\k\0\i\p\6\7\f\7\9\f\2\j\9\y\q\k\f\4\a\z\c\z\f\t\p\3\c\v\u\a\m\q\a\j\d\y\9\a\8\7\7\w\x\p\b\x\2\b\c\5\d\h\3\5\0\k\f\h\6\d\4\j\d\y\g\q\v\3\9\j\9\p\r\e\p\g\e\9\z\9\t\q\j\t\3\f\1\8\1\p\w\v\b\e\2\d\j\x\w\i\9\3\g\o\z\m\z\t\t\j\i\r\o\o\s\1\j\i\7\q\n\l\3\9\3\4\a\m\a\f\8\0\d\o\d\t\t\h\7\6\6\r\n\d\n\q\h\n\m\n\1\e\b\c\n\k\2\p\g\u\6\n ]] 00:06:49.907 01:48:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:49.907 01:48:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:49.907 [2024-11-19 01:48:00.464425] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:49.907 [2024-11-19 01:48:00.464536] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72401 ] 00:06:50.166 [2024-11-19 01:48:00.598188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.166 [2024-11-19 01:48:00.616809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.166 [2024-11-19 01:48:00.644740] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.166  [2024-11-19T01:48:00.781Z] Copying: 512/512 [B] (average 500 kBps) 00:06:50.166 00:06:50.166 01:48:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 2re72hf9c7zr1c1kw29h2smbohnog2kjnw8sir49d59pwolxml1rfrrquurlmkueqepso4ps5hh1crcq1tpz9z3iqqcy8c7qi5y5lqpc42e73lxv76si37djhuu0t5rwnux6ufj2xuug0hhmsslsruy6usqg149iootphwdbm7uwx6lfpl17xhujaqndj42aguwd6q8qp8wc96n5xb9nnrl8vdohdn8oy6t9rm9jd95snzw201j2zx2tu0um4o5axn811vjs4vlze4knj0420xi6vncv5xyzu0ufbzw43d20ei9gxkwv15mo3q1s0gol2fq8lqms8t7y3ctlavcvq1mwfekirfk0ip67f79f2j9yqkf4azczftp3cvuamqajdy9a877wxpbx2bc5dh350kfh6d4jdygqv39j9prepge9z9tqjt3f181pwvbe2djxwi93gozmzttjiroos1ji7qnl3934amaf80dodtth766rndnqhnmn1ebcnk2pgu6n == \2\r\e\7\2\h\f\9\c\7\z\r\1\c\1\k\w\2\9\h\2\s\m\b\o\h\n\o\g\2\k\j\n\w\8\s\i\r\4\9\d\5\9\p\w\o\l\x\m\l\1\r\f\r\r\q\u\u\r\l\m\k\u\e\q\e\p\s\o\4\p\s\5\h\h\1\c\r\c\q\1\t\p\z\9\z\3\i\q\q\c\y\8\c\7\q\i\5\y\5\l\q\p\c\4\2\e\7\3\l\x\v\7\6\s\i\3\7\d\j\h\u\u\0\t\5\r\w\n\u\x\6\u\f\j\2\x\u\u\g\0\h\h\m\s\s\l\s\r\u\y\6\u\s\q\g\1\4\9\i\o\o\t\p\h\w\d\b\m\7\u\w\x\6\l\f\p\l\1\7\x\h\u\j\a\q\n\d\j\4\2\a\g\u\w\d\6\q\8\q\p\8\w\c\9\6\n\5\x\b\9\n\n\r\l\8\v\d\o\h\d\n\8\o\y\6\t\9\r\m\9\j\d\9\5\s\n\z\w\2\0\1\j\2\z\x\2\t\u\0\u\m\4\o\5\a\x\n\8\1\1\v\j\s\4\v\l\z\e\4\k\n\j\0\4\2\0\x\i\6\v\n\c\v\5\x\y\z\u\0\u\f\b\z\w\4\3\d\2\0\e\i\9\g\x\k\w\v\1\5\m\o\3\q\1\s\0\g\o\l\2\f\q\8\l\q\m\s\8\t\7\y\3\c\t\l\a\v\c\v\q\1\m\w\f\e\k\i\r\f\k\0\i\p\6\7\f\7\9\f\2\j\9\y\q\k\f\4\a\z\c\z\f\t\p\3\c\v\u\a\m\q\a\j\d\y\9\a\8\7\7\w\x\p\b\x\2\b\c\5\d\h\3\5\0\k\f\h\6\d\4\j\d\y\g\q\v\3\9\j\9\p\r\e\p\g\e\9\z\9\t\q\j\t\3\f\1\8\1\p\w\v\b\e\2\d\j\x\w\i\9\3\g\o\z\m\z\t\t\j\i\r\o\o\s\1\j\i\7\q\n\l\3\9\3\4\a\m\a\f\8\0\d\o\d\t\t\h\7\6\6\r\n\d\n\q\h\n\m\n\1\e\b\c\n\k\2\p\g\u\6\n ]] 00:06:50.166 01:48:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:50.166 01:48:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:50.425 [2024-11-19 01:48:00.815993] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:50.425 [2024-11-19 01:48:00.816087] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72405 ] 00:06:50.425 [2024-11-19 01:48:00.958041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.425 [2024-11-19 01:48:00.978608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.425 [2024-11-19 01:48:01.006000] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.425  [2024-11-19T01:48:01.299Z] Copying: 512/512 [B] (average 250 kBps) 00:06:50.684 00:06:50.684 01:48:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 2re72hf9c7zr1c1kw29h2smbohnog2kjnw8sir49d59pwolxml1rfrrquurlmkueqepso4ps5hh1crcq1tpz9z3iqqcy8c7qi5y5lqpc42e73lxv76si37djhuu0t5rwnux6ufj2xuug0hhmsslsruy6usqg149iootphwdbm7uwx6lfpl17xhujaqndj42aguwd6q8qp8wc96n5xb9nnrl8vdohdn8oy6t9rm9jd95snzw201j2zx2tu0um4o5axn811vjs4vlze4knj0420xi6vncv5xyzu0ufbzw43d20ei9gxkwv15mo3q1s0gol2fq8lqms8t7y3ctlavcvq1mwfekirfk0ip67f79f2j9yqkf4azczftp3cvuamqajdy9a877wxpbx2bc5dh350kfh6d4jdygqv39j9prepge9z9tqjt3f181pwvbe2djxwi93gozmzttjiroos1ji7qnl3934amaf80dodtth766rndnqhnmn1ebcnk2pgu6n == \2\r\e\7\2\h\f\9\c\7\z\r\1\c\1\k\w\2\9\h\2\s\m\b\o\h\n\o\g\2\k\j\n\w\8\s\i\r\4\9\d\5\9\p\w\o\l\x\m\l\1\r\f\r\r\q\u\u\r\l\m\k\u\e\q\e\p\s\o\4\p\s\5\h\h\1\c\r\c\q\1\t\p\z\9\z\3\i\q\q\c\y\8\c\7\q\i\5\y\5\l\q\p\c\4\2\e\7\3\l\x\v\7\6\s\i\3\7\d\j\h\u\u\0\t\5\r\w\n\u\x\6\u\f\j\2\x\u\u\g\0\h\h\m\s\s\l\s\r\u\y\6\u\s\q\g\1\4\9\i\o\o\t\p\h\w\d\b\m\7\u\w\x\6\l\f\p\l\1\7\x\h\u\j\a\q\n\d\j\4\2\a\g\u\w\d\6\q\8\q\p\8\w\c\9\6\n\5\x\b\9\n\n\r\l\8\v\d\o\h\d\n\8\o\y\6\t\9\r\m\9\j\d\9\5\s\n\z\w\2\0\1\j\2\z\x\2\t\u\0\u\m\4\o\5\a\x\n\8\1\1\v\j\s\4\v\l\z\e\4\k\n\j\0\4\2\0\x\i\6\v\n\c\v\5\x\y\z\u\0\u\f\b\z\w\4\3\d\2\0\e\i\9\g\x\k\w\v\1\5\m\o\3\q\1\s\0\g\o\l\2\f\q\8\l\q\m\s\8\t\7\y\3\c\t\l\a\v\c\v\q\1\m\w\f\e\k\i\r\f\k\0\i\p\6\7\f\7\9\f\2\j\9\y\q\k\f\4\a\z\c\z\f\t\p\3\c\v\u\a\m\q\a\j\d\y\9\a\8\7\7\w\x\p\b\x\2\b\c\5\d\h\3\5\0\k\f\h\6\d\4\j\d\y\g\q\v\3\9\j\9\p\r\e\p\g\e\9\z\9\t\q\j\t\3\f\1\8\1\p\w\v\b\e\2\d\j\x\w\i\9\3\g\o\z\m\z\t\t\j\i\r\o\o\s\1\j\i\7\q\n\l\3\9\3\4\a\m\a\f\8\0\d\o\d\t\t\h\7\6\6\r\n\d\n\q\h\n\m\n\1\e\b\c\n\k\2\p\g\u\6\n ]] 00:06:50.684 00:06:50.684 real 0m2.770s 00:06:50.684 user 0m1.295s 00:06:50.684 sys 0m1.204s 00:06:50.684 01:48:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.684 ************************************ 00:06:50.684 END TEST dd_flags_misc 00:06:50.684 ************************************ 00:06:50.684 01:48:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:50.684 01:48:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:50.684 01:48:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:50.684 * Second test run, disabling liburing, forcing AIO 00:06:50.684 01:48:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:50.684 01:48:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:50.684 01:48:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.684 01:48:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.684 01:48:01 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:50.684 ************************************ 00:06:50.684 START TEST dd_flag_append_forced_aio 00:06:50.684 ************************************ 00:06:50.685 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:06:50.685 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:50.685 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:50.685 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:50.685 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:50.685 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:50.685 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=vevf9rahxz6bs776eujcvo3xkqjj00hn 00:06:50.685 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:50.685 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:50.685 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:50.685 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=5q1tvq7d44q1qi9w2dvohvgamtkt7i18 00:06:50.685 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s vevf9rahxz6bs776eujcvo3xkqjj00hn 00:06:50.685 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 5q1tvq7d44q1qi9w2dvohvgamtkt7i18 00:06:50.685 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:50.685 [2024-11-19 01:48:01.239877] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
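The append test seeds two 32-byte strings, dump0 into dd.dump0 and dump1 into dd.dump1, then copies dd.dump0 onto dd.dump1 with --oflag=append; the assertion that follows (pid 72439 run, next records) expects the destination to read as dump1 immediately followed by dump0. A coreutils equivalent for comparison (spdk_dd's --aio switch selects its POSIX AIO backend and has no coreutils analogue, so it is omitted; conv=notrunc is required because plain GNU dd truncates the output file before appending):

    dump0=vevf9rahxz6bs776eujcvo3xkqjj00hn   # 32 generated bytes, from the trace
    dump1=5q1tvq7d44q1qi9w2dvohvgamtkt7i18   # pre-existing destination content
    printf %s "$dump0" > dd.dump0
    printf %s "$dump1" > dd.dump1
    dd if=dd.dump0 of=dd.dump1 oflag=append conv=notrunc
    [[ $(<dd.dump1) == "${dump1}${dump0}" ]] && echo "append honored"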
00:06:50.685 [2024-11-19 01:48:01.239975] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72439 ] 00:06:50.957 [2024-11-19 01:48:01.386165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.957 [2024-11-19 01:48:01.404059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.957 [2024-11-19 01:48:01.430293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.957  [2024-11-19T01:48:01.572Z] Copying: 32/32 [B] (average 31 kBps) 00:06:50.957 00:06:50.957 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 5q1tvq7d44q1qi9w2dvohvgamtkt7i18vevf9rahxz6bs776eujcvo3xkqjj00hn == \5\q\1\t\v\q\7\d\4\4\q\1\q\i\9\w\2\d\v\o\h\v\g\a\m\t\k\t\7\i\1\8\v\e\v\f\9\r\a\h\x\z\6\b\s\7\7\6\e\u\j\c\v\o\3\x\k\q\j\j\0\0\h\n ]] 00:06:50.957 00:06:50.957 real 0m0.389s 00:06:50.957 user 0m0.182s 00:06:50.957 sys 0m0.089s 00:06:50.957 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.957 ************************************ 00:06:50.957 END TEST dd_flag_append_forced_aio 00:06:50.957 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:50.957 ************************************ 00:06:51.217 01:48:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:51.217 01:48:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.217 01:48:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.217 01:48:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:51.217 ************************************ 00:06:51.217 START TEST dd_flag_directory_forced_aio 00:06:51.217 ************************************ 00:06:51.217 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:06:51.217 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:51.217 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:51.217 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:51.217 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.217 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.217 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.217 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.217 01:48:01 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.217 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.217 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.217 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:51.217 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:51.217 [2024-11-19 01:48:01.673697] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:51.217 [2024-11-19 01:48:01.673807] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72460 ] 00:06:51.217 [2024-11-19 01:48:01.819826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.477 [2024-11-19 01:48:01.839543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.477 [2024-11-19 01:48:01.866741] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.477 [2024-11-19 01:48:01.881640] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:51.477 [2024-11-19 01:48:01.881709] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:51.477 [2024-11-19 01:48:01.881742] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:51.477 [2024-11-19 01:48:01.940302] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:51.477 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:06:51.477 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:51.477 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:06:51.477 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:51.477 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:51.477 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:51.477 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:51.477 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:51.477 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:51.477 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.477 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.477 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.477 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.477 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.477 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.477 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.477 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:51.477 01:48:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:51.477 [2024-11-19 01:48:02.034551] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:51.477 [2024-11-19 01:48:02.034664] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72465 ] 00:06:51.736 [2024-11-19 01:48:02.170252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.736 [2024-11-19 01:48:02.191876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.736 [2024-11-19 01:48:02.220084] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.736 [2024-11-19 01:48:02.235100] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:51.736 [2024-11-19 01:48:02.235165] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:51.736 [2024-11-19 01:48:02.235199] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:51.736 [2024-11-19 01:48:02.295141] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:51.736 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:06:51.736 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:51.736 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:06:51.736 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:51.736 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:51.736 01:48:02 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:51.736 00:06:51.736 real 0m0.725s 00:06:51.736 user 0m0.325s 00:06:51.736 sys 0m0.193s 00:06:51.736 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.736 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:51.736 ************************************ 00:06:51.736 END TEST dd_flag_directory_forced_aio 00:06:51.736 ************************************ 00:06:51.995 01:48:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:51.995 01:48:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.995 01:48:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.995 01:48:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:51.995 ************************************ 00:06:51.995 START TEST dd_flag_nofollow_forced_aio 00:06:51.995 ************************************ 00:06:51.995 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:06:51.995 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:51.995 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:51.995 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:51.995 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:51.995 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:51.995 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:51.995 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:51.995 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.995 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.995 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.995 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.995 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.995 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:51.995 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:51.995 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:51.995 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:51.995 [2024-11-19 01:48:02.447312] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:51.995 [2024-11-19 01:48:02.447436] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72498 ] 00:06:51.995 [2024-11-19 01:48:02.587655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.995 [2024-11-19 01:48:02.606363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.255 [2024-11-19 01:48:02.633963] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.255 [2024-11-19 01:48:02.648104] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:52.255 [2024-11-19 01:48:02.648167] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:52.255 [2024-11-19 01:48:02.648200] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.255 [2024-11-19 01:48:02.706523] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:52.255 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:06:52.255 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.255 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:06:52.255 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:52.255 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:52.255 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.255 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:52.255 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:52.255 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:52.255 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.255 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.255 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.255 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.255 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.255 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.255 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.255 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:52.255 01:48:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:52.255 [2024-11-19 01:48:02.792307] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:52.255 [2024-11-19 01:48:02.792409] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72502 ] 00:06:52.515 [2024-11-19 01:48:02.927807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.515 [2024-11-19 01:48:02.946437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.515 [2024-11-19 01:48:02.973571] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.515 [2024-11-19 01:48:02.989293] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:52.515 [2024-11-19 01:48:02.989360] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:52.515 [2024-11-19 01:48:02.989395] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.515 [2024-11-19 01:48:03.049010] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:52.515 01:48:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:06:52.515 01:48:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.515 01:48:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:06:52.515 01:48:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:52.515 01:48:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:52.515 01:48:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.515 01:48:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:06:52.515 01:48:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:52.515 01:48:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:52.515 01:48:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:52.775 [2024-11-19 01:48:03.153835] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:52.775 [2024-11-19 01:48:03.153941] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72509 ] 00:06:52.775 [2024-11-19 01:48:03.297866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.775 [2024-11-19 01:48:03.316616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.775 [2024-11-19 01:48:03.346591] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.775  [2024-11-19T01:48:03.649Z] Copying: 512/512 [B] (average 500 kBps) 00:06:53.034 00:06:53.034 01:48:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ vl9pzyxlmxkzz6185hdphi9z73x8da30oki24qpu1liyagqgchcemi25wiwbywqmsemd8v53nb6su0kuf62sarc2zdgnffbm1jbhnjtlu5lai9zipfzja2xmesyvq1gq3w00eot2xhuhza1si8ot94hn4jus13moqseafnh8uqe0zbqcccvb8vfovml9a9j1pvayi83r3uadm9rrkcoozjqht0wy4nf4pe5lttl7budnunqmhvzl4ttd98hy5mumj8msknomo847nzo346dechl380etfsudn1dnjaan89wysiy4dzc0xgalje90ivkjdekp9h6j5e63uldl7331yhmx25it25qq8jcel33pvkgn84jz95zx9i8do7b8rwwgbwk7w6ew9055pgqjrk5fahbkojtzz3zla4p403rk8cxrkr5x2g9cocnbaz42b6uxw7hjoi3u5rwdl8kfvh2jlqshd4xt26motqkixcypz4f0wgql81orfwlel9cz4a7o == \v\l\9\p\z\y\x\l\m\x\k\z\z\6\1\8\5\h\d\p\h\i\9\z\7\3\x\8\d\a\3\0\o\k\i\2\4\q\p\u\1\l\i\y\a\g\q\g\c\h\c\e\m\i\2\5\w\i\w\b\y\w\q\m\s\e\m\d\8\v\5\3\n\b\6\s\u\0\k\u\f\6\2\s\a\r\c\2\z\d\g\n\f\f\b\m\1\j\b\h\n\j\t\l\u\5\l\a\i\9\z\i\p\f\z\j\a\2\x\m\e\s\y\v\q\1\g\q\3\w\0\0\e\o\t\2\x\h\u\h\z\a\1\s\i\8\o\t\9\4\h\n\4\j\u\s\1\3\m\o\q\s\e\a\f\n\h\8\u\q\e\0\z\b\q\c\c\c\v\b\8\v\f\o\v\m\l\9\a\9\j\1\p\v\a\y\i\8\3\r\3\u\a\d\m\9\r\r\k\c\o\o\z\j\q\h\t\0\w\y\4\n\f\4\p\e\5\l\t\t\l\7\b\u\d\n\u\n\q\m\h\v\z\l\4\t\t\d\9\8\h\y\5\m\u\m\j\8\m\s\k\n\o\m\o\8\4\7\n\z\o\3\4\6\d\e\c\h\l\3\8\0\e\t\f\s\u\d\n\1\d\n\j\a\a\n\8\9\w\y\s\i\y\4\d\z\c\0\x\g\a\l\j\e\9\0\i\v\k\j\d\e\k\p\9\h\6\j\5\e\6\3\u\l\d\l\7\3\3\1\y\h\m\x\2\5\i\t\2\5\q\q\8\j\c\e\l\3\3\p\v\k\g\n\8\4\j\z\9\5\z\x\9\i\8\d\o\7\b\8\r\w\w\g\b\w\k\7\w\6\e\w\9\0\5\5\p\g\q\j\r\k\5\f\a\h\b\k\o\j\t\z\z\3\z\l\a\4\p\4\0\3\r\k\8\c\x\r\k\r\5\x\2\g\9\c\o\c\n\b\a\z\4\2\b\6\u\x\w\7\h\j\o\i\3\u\5\r\w\d\l\8\k\f\v\h\2\j\l\q\s\h\d\4\x\t\2\6\m\o\t\q\k\i\x\c\y\p\z\4\f\0\w\g\q\l\8\1\o\r\f\w\l\e\l\9\c\z\4\a\7\o ]] 00:06:53.034 00:06:53.034 real 0m1.090s 00:06:53.034 user 0m0.507s 00:06:53.034 sys 0m0.259s 00:06:53.034 01:48:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.034 ************************************ 00:06:53.034 01:48:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:53.034 END TEST dd_flag_nofollow_forced_aio 00:06:53.034 ************************************ 00:06:53.034 01:48:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:06:53.035 01:48:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.035 01:48:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.035 01:48:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:53.035 ************************************ 00:06:53.035 START TEST dd_flag_noatime_forced_aio 00:06:53.035 ************************************ 00:06:53.035 01:48:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:06:53.035 01:48:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:53.035 01:48:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:53.035 01:48:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:53.035 01:48:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:53.035 01:48:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:53.035 01:48:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:53.035 01:48:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1731980883 00:06:53.035 01:48:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:53.035 01:48:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1731980883 00:06:53.035 01:48:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:53.972 01:48:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:54.231 [2024-11-19 01:48:04.615856] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
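From the "Second test run, disabling liburing, forcing AIO" banner onward, every posix test reruns with spdk_dd's --aio switch, exercising the POSIX AIO code path instead of io_uring; the trace shows dd/posix.sh@113 appending the flag to the harness's command array. The noatime assertions here are otherwise identical to the earlier dd_flag_noatime run. Roughly (the DD_APP initialization is an assumption for illustration; only the += line appears in the trace):

    DD_APP=(/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)   # assumed setup
    DD_APP+=("--aio")    # literal line from dd/posix.sh@113 in the trace above
    "${DD_APP[@]}" --if=test/dd/dd.dump0 --iflag=noatime --of=test/dd/dd.dump1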
00:06:54.231 [2024-11-19 01:48:04.615961] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72550 ] 00:06:54.231 [2024-11-19 01:48:04.768216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.231 [2024-11-19 01:48:04.791371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.231 [2024-11-19 01:48:04.823121] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.231  [2024-11-19T01:48:05.105Z] Copying: 512/512 [B] (average 500 kBps) 00:06:54.490 00:06:54.490 01:48:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:54.490 01:48:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1731980883 )) 00:06:54.490 01:48:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:54.490 01:48:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1731980883 )) 00:06:54.490 01:48:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:54.490 [2024-11-19 01:48:05.033775] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:54.490 [2024-11-19 01:48:05.033873] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72556 ] 00:06:54.749 [2024-11-19 01:48:05.185600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.749 [2024-11-19 01:48:05.209644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.749 [2024-11-19 01:48:05.243153] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.749  [2024-11-19T01:48:05.624Z] Copying: 512/512 [B] (average 500 kBps) 00:06:55.009 00:06:55.009 01:48:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:55.009 01:48:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1731980885 )) 00:06:55.009 00:06:55.009 real 0m1.872s 00:06:55.009 user 0m0.427s 00:06:55.009 sys 0m0.204s 00:06:55.009 01:48:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.009 01:48:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:55.009 ************************************ 00:06:55.009 END TEST dd_flag_noatime_forced_aio 00:06:55.009 ************************************ 00:06:55.009 01:48:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:55.009 01:48:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.009 01:48:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.009 01:48:05 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:55.009 ************************************ 00:06:55.009 START TEST dd_flags_misc_forced_aio 00:06:55.009 ************************************ 00:06:55.009 01:48:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:06:55.009 01:48:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:55.009 01:48:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:55.009 01:48:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:55.009 01:48:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:55.009 01:48:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:55.009 01:48:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:55.009 01:48:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:55.009 01:48:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:55.009 01:48:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:55.009 [2024-11-19 01:48:05.518727] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:55.009 [2024-11-19 01:48:05.518828] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72588 ] 00:06:55.269 [2024-11-19 01:48:05.669692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.269 [2024-11-19 01:48:05.692923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.269 [2024-11-19 01:48:05.724707] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.269  [2024-11-19T01:48:05.884Z] Copying: 512/512 [B] (average 500 kBps) 00:06:55.269 00:06:55.269 01:48:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hq5tn0udpj0srq3x0g0o04vxd8q5kn3gahjuq7ecehtvxqyl5vbbbahic04sohukeyfuhtnivq5juv01yhbre5exqrqp55ty5p48ufqe81wbh75cgewpwbrnz1quqccvilh56rcq7qvn12qr9rzjjsgt0cn9i8jwtu4bfvyld5ecozgtkcweclba0needc3w1ftyn2iwvn0hu6b3k8jhv361s9owvvweovvt35o587mwuiiouvxcyi9r8ze0t2nciym1lswqthaidn3y82fjw2w273in3n9lmhi8rvlocm77lwtxgv1zs2r1hjwagnja3zkxq1i681l8r5wfv2hs872zhzxe6mezb61l9ly0yaarnb70u7qrg8ld72wst71k487a8stvb0ppl6qu54eah5daf91shvwagpkjtv3ueu1t43a6tsd0nz5dijdqlq47hn33gsaz3lwv04z5amugof6lwbo1siu13cfosr40ha0h0kss7usuycghl6u3kh4i == 
\h\q\5\t\n\0\u\d\p\j\0\s\r\q\3\x\0\g\0\o\0\4\v\x\d\8\q\5\k\n\3\g\a\h\j\u\q\7\e\c\e\h\t\v\x\q\y\l\5\v\b\b\b\a\h\i\c\0\4\s\o\h\u\k\e\y\f\u\h\t\n\i\v\q\5\j\u\v\0\1\y\h\b\r\e\5\e\x\q\r\q\p\5\5\t\y\5\p\4\8\u\f\q\e\8\1\w\b\h\7\5\c\g\e\w\p\w\b\r\n\z\1\q\u\q\c\c\v\i\l\h\5\6\r\c\q\7\q\v\n\1\2\q\r\9\r\z\j\j\s\g\t\0\c\n\9\i\8\j\w\t\u\4\b\f\v\y\l\d\5\e\c\o\z\g\t\k\c\w\e\c\l\b\a\0\n\e\e\d\c\3\w\1\f\t\y\n\2\i\w\v\n\0\h\u\6\b\3\k\8\j\h\v\3\6\1\s\9\o\w\v\v\w\e\o\v\v\t\3\5\o\5\8\7\m\w\u\i\i\o\u\v\x\c\y\i\9\r\8\z\e\0\t\2\n\c\i\y\m\1\l\s\w\q\t\h\a\i\d\n\3\y\8\2\f\j\w\2\w\2\7\3\i\n\3\n\9\l\m\h\i\8\r\v\l\o\c\m\7\7\l\w\t\x\g\v\1\z\s\2\r\1\h\j\w\a\g\n\j\a\3\z\k\x\q\1\i\6\8\1\l\8\r\5\w\f\v\2\h\s\8\7\2\z\h\z\x\e\6\m\e\z\b\6\1\l\9\l\y\0\y\a\a\r\n\b\7\0\u\7\q\r\g\8\l\d\7\2\w\s\t\7\1\k\4\8\7\a\8\s\t\v\b\0\p\p\l\6\q\u\5\4\e\a\h\5\d\a\f\9\1\s\h\v\w\a\g\p\k\j\t\v\3\u\e\u\1\t\4\3\a\6\t\s\d\0\n\z\5\d\i\j\d\q\l\q\4\7\h\n\3\3\g\s\a\z\3\l\w\v\0\4\z\5\a\m\u\g\o\f\6\l\w\b\o\1\s\i\u\1\3\c\f\o\s\r\4\0\h\a\0\h\0\k\s\s\7\u\s\u\y\c\g\h\l\6\u\3\k\h\4\i ]] 00:06:55.269 01:48:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:55.269 01:48:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:55.528 [2024-11-19 01:48:05.932997] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:55.528 [2024-11-19 01:48:05.933106] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72590 ] 00:06:55.528 [2024-11-19 01:48:06.083550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.528 [2024-11-19 01:48:06.107405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.528 [2024-11-19 01:48:06.140071] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.786  [2024-11-19T01:48:06.401Z] Copying: 512/512 [B] (average 500 kBps) 00:06:55.786 00:06:55.787 01:48:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hq5tn0udpj0srq3x0g0o04vxd8q5kn3gahjuq7ecehtvxqyl5vbbbahic04sohukeyfuhtnivq5juv01yhbre5exqrqp55ty5p48ufqe81wbh75cgewpwbrnz1quqccvilh56rcq7qvn12qr9rzjjsgt0cn9i8jwtu4bfvyld5ecozgtkcweclba0needc3w1ftyn2iwvn0hu6b3k8jhv361s9owvvweovvt35o587mwuiiouvxcyi9r8ze0t2nciym1lswqthaidn3y82fjw2w273in3n9lmhi8rvlocm77lwtxgv1zs2r1hjwagnja3zkxq1i681l8r5wfv2hs872zhzxe6mezb61l9ly0yaarnb70u7qrg8ld72wst71k487a8stvb0ppl6qu54eah5daf91shvwagpkjtv3ueu1t43a6tsd0nz5dijdqlq47hn33gsaz3lwv04z5amugof6lwbo1siu13cfosr40ha0h0kss7usuycghl6u3kh4i == 
\h\q\5\t\n\0\u\d\p\j\0\s\r\q\3\x\0\g\0\o\0\4\v\x\d\8\q\5\k\n\3\g\a\h\j\u\q\7\e\c\e\h\t\v\x\q\y\l\5\v\b\b\b\a\h\i\c\0\4\s\o\h\u\k\e\y\f\u\h\t\n\i\v\q\5\j\u\v\0\1\y\h\b\r\e\5\e\x\q\r\q\p\5\5\t\y\5\p\4\8\u\f\q\e\8\1\w\b\h\7\5\c\g\e\w\p\w\b\r\n\z\1\q\u\q\c\c\v\i\l\h\5\6\r\c\q\7\q\v\n\1\2\q\r\9\r\z\j\j\s\g\t\0\c\n\9\i\8\j\w\t\u\4\b\f\v\y\l\d\5\e\c\o\z\g\t\k\c\w\e\c\l\b\a\0\n\e\e\d\c\3\w\1\f\t\y\n\2\i\w\v\n\0\h\u\6\b\3\k\8\j\h\v\3\6\1\s\9\o\w\v\v\w\e\o\v\v\t\3\5\o\5\8\7\m\w\u\i\i\o\u\v\x\c\y\i\9\r\8\z\e\0\t\2\n\c\i\y\m\1\l\s\w\q\t\h\a\i\d\n\3\y\8\2\f\j\w\2\w\2\7\3\i\n\3\n\9\l\m\h\i\8\r\v\l\o\c\m\7\7\l\w\t\x\g\v\1\z\s\2\r\1\h\j\w\a\g\n\j\a\3\z\k\x\q\1\i\6\8\1\l\8\r\5\w\f\v\2\h\s\8\7\2\z\h\z\x\e\6\m\e\z\b\6\1\l\9\l\y\0\y\a\a\r\n\b\7\0\u\7\q\r\g\8\l\d\7\2\w\s\t\7\1\k\4\8\7\a\8\s\t\v\b\0\p\p\l\6\q\u\5\4\e\a\h\5\d\a\f\9\1\s\h\v\w\a\g\p\k\j\t\v\3\u\e\u\1\t\4\3\a\6\t\s\d\0\n\z\5\d\i\j\d\q\l\q\4\7\h\n\3\3\g\s\a\z\3\l\w\v\0\4\z\5\a\m\u\g\o\f\6\l\w\b\o\1\s\i\u\1\3\c\f\o\s\r\4\0\h\a\0\h\0\k\s\s\7\u\s\u\y\c\g\h\l\6\u\3\k\h\4\i ]] 00:06:55.787 01:48:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:55.787 01:48:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:55.787 [2024-11-19 01:48:06.336920] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:55.787 [2024-11-19 01:48:06.337023] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72603 ] 00:06:56.045 [2024-11-19 01:48:06.486991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.045 [2024-11-19 01:48:06.510539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.045 [2024-11-19 01:48:06.543146] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.045  [2024-11-19T01:48:06.919Z] Copying: 512/512 [B] (average 250 kBps) 00:06:56.304 00:06:56.305 01:48:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hq5tn0udpj0srq3x0g0o04vxd8q5kn3gahjuq7ecehtvxqyl5vbbbahic04sohukeyfuhtnivq5juv01yhbre5exqrqp55ty5p48ufqe81wbh75cgewpwbrnz1quqccvilh56rcq7qvn12qr9rzjjsgt0cn9i8jwtu4bfvyld5ecozgtkcweclba0needc3w1ftyn2iwvn0hu6b3k8jhv361s9owvvweovvt35o587mwuiiouvxcyi9r8ze0t2nciym1lswqthaidn3y82fjw2w273in3n9lmhi8rvlocm77lwtxgv1zs2r1hjwagnja3zkxq1i681l8r5wfv2hs872zhzxe6mezb61l9ly0yaarnb70u7qrg8ld72wst71k487a8stvb0ppl6qu54eah5daf91shvwagpkjtv3ueu1t43a6tsd0nz5dijdqlq47hn33gsaz3lwv04z5amugof6lwbo1siu13cfosr40ha0h0kss7usuycghl6u3kh4i == 
\h\q\5\t\n\0\u\d\p\j\0\s\r\q\3\x\0\g\0\o\0\4\v\x\d\8\q\5\k\n\3\g\a\h\j\u\q\7\e\c\e\h\t\v\x\q\y\l\5\v\b\b\b\a\h\i\c\0\4\s\o\h\u\k\e\y\f\u\h\t\n\i\v\q\5\j\u\v\0\1\y\h\b\r\e\5\e\x\q\r\q\p\5\5\t\y\5\p\4\8\u\f\q\e\8\1\w\b\h\7\5\c\g\e\w\p\w\b\r\n\z\1\q\u\q\c\c\v\i\l\h\5\6\r\c\q\7\q\v\n\1\2\q\r\9\r\z\j\j\s\g\t\0\c\n\9\i\8\j\w\t\u\4\b\f\v\y\l\d\5\e\c\o\z\g\t\k\c\w\e\c\l\b\a\0\n\e\e\d\c\3\w\1\f\t\y\n\2\i\w\v\n\0\h\u\6\b\3\k\8\j\h\v\3\6\1\s\9\o\w\v\v\w\e\o\v\v\t\3\5\o\5\8\7\m\w\u\i\i\o\u\v\x\c\y\i\9\r\8\z\e\0\t\2\n\c\i\y\m\1\l\s\w\q\t\h\a\i\d\n\3\y\8\2\f\j\w\2\w\2\7\3\i\n\3\n\9\l\m\h\i\8\r\v\l\o\c\m\7\7\l\w\t\x\g\v\1\z\s\2\r\1\h\j\w\a\g\n\j\a\3\z\k\x\q\1\i\6\8\1\l\8\r\5\w\f\v\2\h\s\8\7\2\z\h\z\x\e\6\m\e\z\b\6\1\l\9\l\y\0\y\a\a\r\n\b\7\0\u\7\q\r\g\8\l\d\7\2\w\s\t\7\1\k\4\8\7\a\8\s\t\v\b\0\p\p\l\6\q\u\5\4\e\a\h\5\d\a\f\9\1\s\h\v\w\a\g\p\k\j\t\v\3\u\e\u\1\t\4\3\a\6\t\s\d\0\n\z\5\d\i\j\d\q\l\q\4\7\h\n\3\3\g\s\a\z\3\l\w\v\0\4\z\5\a\m\u\g\o\f\6\l\w\b\o\1\s\i\u\1\3\c\f\o\s\r\4\0\h\a\0\h\0\k\s\s\7\u\s\u\y\c\g\h\l\6\u\3\k\h\4\i ]] 00:06:56.305 01:48:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:56.305 01:48:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:56.305 [2024-11-19 01:48:06.755259] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:56.305 [2024-11-19 01:48:06.755353] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72605 ] 00:06:56.305 [2024-11-19 01:48:06.903668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.565 [2024-11-19 01:48:06.927114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.565 [2024-11-19 01:48:06.959050] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.565  [2024-11-19T01:48:07.180Z] Copying: 512/512 [B] (average 500 kBps) 00:06:56.565 00:06:56.565 01:48:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hq5tn0udpj0srq3x0g0o04vxd8q5kn3gahjuq7ecehtvxqyl5vbbbahic04sohukeyfuhtnivq5juv01yhbre5exqrqp55ty5p48ufqe81wbh75cgewpwbrnz1quqccvilh56rcq7qvn12qr9rzjjsgt0cn9i8jwtu4bfvyld5ecozgtkcweclba0needc3w1ftyn2iwvn0hu6b3k8jhv361s9owvvweovvt35o587mwuiiouvxcyi9r8ze0t2nciym1lswqthaidn3y82fjw2w273in3n9lmhi8rvlocm77lwtxgv1zs2r1hjwagnja3zkxq1i681l8r5wfv2hs872zhzxe6mezb61l9ly0yaarnb70u7qrg8ld72wst71k487a8stvb0ppl6qu54eah5daf91shvwagpkjtv3ueu1t43a6tsd0nz5dijdqlq47hn33gsaz3lwv04z5amugof6lwbo1siu13cfosr40ha0h0kss7usuycghl6u3kh4i == 
\h\q\5\t\n\0\u\d\p\j\0\s\r\q\3\x\0\g\0\o\0\4\v\x\d\8\q\5\k\n\3\g\a\h\j\u\q\7\e\c\e\h\t\v\x\q\y\l\5\v\b\b\b\a\h\i\c\0\4\s\o\h\u\k\e\y\f\u\h\t\n\i\v\q\5\j\u\v\0\1\y\h\b\r\e\5\e\x\q\r\q\p\5\5\t\y\5\p\4\8\u\f\q\e\8\1\w\b\h\7\5\c\g\e\w\p\w\b\r\n\z\1\q\u\q\c\c\v\i\l\h\5\6\r\c\q\7\q\v\n\1\2\q\r\9\r\z\j\j\s\g\t\0\c\n\9\i\8\j\w\t\u\4\b\f\v\y\l\d\5\e\c\o\z\g\t\k\c\w\e\c\l\b\a\0\n\e\e\d\c\3\w\1\f\t\y\n\2\i\w\v\n\0\h\u\6\b\3\k\8\j\h\v\3\6\1\s\9\o\w\v\v\w\e\o\v\v\t\3\5\o\5\8\7\m\w\u\i\i\o\u\v\x\c\y\i\9\r\8\z\e\0\t\2\n\c\i\y\m\1\l\s\w\q\t\h\a\i\d\n\3\y\8\2\f\j\w\2\w\2\7\3\i\n\3\n\9\l\m\h\i\8\r\v\l\o\c\m\7\7\l\w\t\x\g\v\1\z\s\2\r\1\h\j\w\a\g\n\j\a\3\z\k\x\q\1\i\6\8\1\l\8\r\5\w\f\v\2\h\s\8\7\2\z\h\z\x\e\6\m\e\z\b\6\1\l\9\l\y\0\y\a\a\r\n\b\7\0\u\7\q\r\g\8\l\d\7\2\w\s\t\7\1\k\4\8\7\a\8\s\t\v\b\0\p\p\l\6\q\u\5\4\e\a\h\5\d\a\f\9\1\s\h\v\w\a\g\p\k\j\t\v\3\u\e\u\1\t\4\3\a\6\t\s\d\0\n\z\5\d\i\j\d\q\l\q\4\7\h\n\3\3\g\s\a\z\3\l\w\v\0\4\z\5\a\m\u\g\o\f\6\l\w\b\o\1\s\i\u\1\3\c\f\o\s\r\4\0\h\a\0\h\0\k\s\s\7\u\s\u\y\c\g\h\l\6\u\3\k\h\4\i ]] 00:06:56.565 01:48:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:56.565 01:48:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:56.565 01:48:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:56.565 01:48:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:56.565 01:48:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:56.565 01:48:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:56.565 [2024-11-19 01:48:07.162607] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
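The eight spdk_dd invocations in this test come from the pair of arrays declared at the top: flags_ro=(direct nonblock) holds the flags legal on the input side, and flags_rw extends it with sync and dsync for the output side, so every read flag is crossed with every write flag. Condensed into the loop it is, with cmp standing in for the pattern comparison the script actually performs on the 512 generated bytes:

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)   # every read flag is also a valid write flag

    for flag_ro in "${flags_ro[@]}"; do
        head -c 512 /dev/urandom > dd.dump0  # fresh 512 bytes per read flag, as gen_bytes does
        for flag_rw in "${flags_rw[@]}"; do
            "$DD" --aio --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
            cmp -s dd.dump0 dd.dump1 || echo "mismatch for $flag_ro/$flag_rw" >&2
        done
    done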
00:06:56.565 [2024-11-19 01:48:07.162748] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72607 ] 00:06:56.824 [2024-11-19 01:48:07.309229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.824 [2024-11-19 01:48:07.327968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.824 [2024-11-19 01:48:07.353983] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.824  [2024-11-19T01:48:07.697Z] Copying: 512/512 [B] (average 500 kBps) 00:06:57.082 00:06:57.082 01:48:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ul6ezojgubuqzixs4tgocsl5o7f4xvzd0nznyfkosa936nv5k95mqptimjqzho7lxy75w6qw2diallzl8uv2w29sl1za55clzwln1jhe3yc8vcigcdni7d5r7rsw3vua3n1xabo4lzishsm11bspmo0r1aue9txg4ippbi1rxdqk4afw1oquyyw9ibohvscvbwkn6oiiuzjt9jc7ssgwko9nvpyirnzvx8577k7pxutsge7gbhkfuj0lzf87yo24ek6grcyhwgfho9sfegci6xr90prhpr2qi8afzm0yqo447kmx4qjz8so2pr34q5vhx727n7wrqnh0w61dvdgbfqfmmkso3t63c34vkvhpd48zbzv9ytzrtilmo7jqddftn49eb06t7g4adc8bt76cr3z4mws079shsmwuaem94wsqilgdl0bi4htzmz4txdfwcrm7ev2hum0z2c6k2vnv9el140edp6x4xevihs08ovpz62k88oq7j1hvnivxncld == \u\l\6\e\z\o\j\g\u\b\u\q\z\i\x\s\4\t\g\o\c\s\l\5\o\7\f\4\x\v\z\d\0\n\z\n\y\f\k\o\s\a\9\3\6\n\v\5\k\9\5\m\q\p\t\i\m\j\q\z\h\o\7\l\x\y\7\5\w\6\q\w\2\d\i\a\l\l\z\l\8\u\v\2\w\2\9\s\l\1\z\a\5\5\c\l\z\w\l\n\1\j\h\e\3\y\c\8\v\c\i\g\c\d\n\i\7\d\5\r\7\r\s\w\3\v\u\a\3\n\1\x\a\b\o\4\l\z\i\s\h\s\m\1\1\b\s\p\m\o\0\r\1\a\u\e\9\t\x\g\4\i\p\p\b\i\1\r\x\d\q\k\4\a\f\w\1\o\q\u\y\y\w\9\i\b\o\h\v\s\c\v\b\w\k\n\6\o\i\i\u\z\j\t\9\j\c\7\s\s\g\w\k\o\9\n\v\p\y\i\r\n\z\v\x\8\5\7\7\k\7\p\x\u\t\s\g\e\7\g\b\h\k\f\u\j\0\l\z\f\8\7\y\o\2\4\e\k\6\g\r\c\y\h\w\g\f\h\o\9\s\f\e\g\c\i\6\x\r\9\0\p\r\h\p\r\2\q\i\8\a\f\z\m\0\y\q\o\4\4\7\k\m\x\4\q\j\z\8\s\o\2\p\r\3\4\q\5\v\h\x\7\2\7\n\7\w\r\q\n\h\0\w\6\1\d\v\d\g\b\f\q\f\m\m\k\s\o\3\t\6\3\c\3\4\v\k\v\h\p\d\4\8\z\b\z\v\9\y\t\z\r\t\i\l\m\o\7\j\q\d\d\f\t\n\4\9\e\b\0\6\t\7\g\4\a\d\c\8\b\t\7\6\c\r\3\z\4\m\w\s\0\7\9\s\h\s\m\w\u\a\e\m\9\4\w\s\q\i\l\g\d\l\0\b\i\4\h\t\z\m\z\4\t\x\d\f\w\c\r\m\7\e\v\2\h\u\m\0\z\2\c\6\k\2\v\n\v\9\e\l\1\4\0\e\d\p\6\x\4\x\e\v\i\h\s\0\8\o\v\p\z\6\2\k\8\8\o\q\7\j\1\h\v\n\i\v\x\n\c\l\d ]] 00:06:57.082 01:48:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:57.082 01:48:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:57.082 [2024-11-19 01:48:07.549018] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:57.082 [2024-11-19 01:48:07.549112] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72620 ] 00:06:57.082 [2024-11-19 01:48:07.692084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.341 [2024-11-19 01:48:07.711379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.341 [2024-11-19 01:48:07.737260] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:57.341  [2024-11-19T01:48:07.956Z] Copying: 512/512 [B] (average 500 kBps) 00:06:57.341 00:06:57.341 01:48:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ul6ezojgubuqzixs4tgocsl5o7f4xvzd0nznyfkosa936nv5k95mqptimjqzho7lxy75w6qw2diallzl8uv2w29sl1za55clzwln1jhe3yc8vcigcdni7d5r7rsw3vua3n1xabo4lzishsm11bspmo0r1aue9txg4ippbi1rxdqk4afw1oquyyw9ibohvscvbwkn6oiiuzjt9jc7ssgwko9nvpyirnzvx8577k7pxutsge7gbhkfuj0lzf87yo24ek6grcyhwgfho9sfegci6xr90prhpr2qi8afzm0yqo447kmx4qjz8so2pr34q5vhx727n7wrqnh0w61dvdgbfqfmmkso3t63c34vkvhpd48zbzv9ytzrtilmo7jqddftn49eb06t7g4adc8bt76cr3z4mws079shsmwuaem94wsqilgdl0bi4htzmz4txdfwcrm7ev2hum0z2c6k2vnv9el140edp6x4xevihs08ovpz62k88oq7j1hvnivxncld == \u\l\6\e\z\o\j\g\u\b\u\q\z\i\x\s\4\t\g\o\c\s\l\5\o\7\f\4\x\v\z\d\0\n\z\n\y\f\k\o\s\a\9\3\6\n\v\5\k\9\5\m\q\p\t\i\m\j\q\z\h\o\7\l\x\y\7\5\w\6\q\w\2\d\i\a\l\l\z\l\8\u\v\2\w\2\9\s\l\1\z\a\5\5\c\l\z\w\l\n\1\j\h\e\3\y\c\8\v\c\i\g\c\d\n\i\7\d\5\r\7\r\s\w\3\v\u\a\3\n\1\x\a\b\o\4\l\z\i\s\h\s\m\1\1\b\s\p\m\o\0\r\1\a\u\e\9\t\x\g\4\i\p\p\b\i\1\r\x\d\q\k\4\a\f\w\1\o\q\u\y\y\w\9\i\b\o\h\v\s\c\v\b\w\k\n\6\o\i\i\u\z\j\t\9\j\c\7\s\s\g\w\k\o\9\n\v\p\y\i\r\n\z\v\x\8\5\7\7\k\7\p\x\u\t\s\g\e\7\g\b\h\k\f\u\j\0\l\z\f\8\7\y\o\2\4\e\k\6\g\r\c\y\h\w\g\f\h\o\9\s\f\e\g\c\i\6\x\r\9\0\p\r\h\p\r\2\q\i\8\a\f\z\m\0\y\q\o\4\4\7\k\m\x\4\q\j\z\8\s\o\2\p\r\3\4\q\5\v\h\x\7\2\7\n\7\w\r\q\n\h\0\w\6\1\d\v\d\g\b\f\q\f\m\m\k\s\o\3\t\6\3\c\3\4\v\k\v\h\p\d\4\8\z\b\z\v\9\y\t\z\r\t\i\l\m\o\7\j\q\d\d\f\t\n\4\9\e\b\0\6\t\7\g\4\a\d\c\8\b\t\7\6\c\r\3\z\4\m\w\s\0\7\9\s\h\s\m\w\u\a\e\m\9\4\w\s\q\i\l\g\d\l\0\b\i\4\h\t\z\m\z\4\t\x\d\f\w\c\r\m\7\e\v\2\h\u\m\0\z\2\c\6\k\2\v\n\v\9\e\l\1\4\0\e\d\p\6\x\4\x\e\v\i\h\s\0\8\o\v\p\z\6\2\k\8\8\o\q\7\j\1\h\v\n\i\v\x\n\c\l\d ]] 00:06:57.341 01:48:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:57.341 01:48:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:57.341 [2024-11-19 01:48:07.929602] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:57.341 [2024-11-19 01:48:07.929708] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72624 ] 00:06:57.600 [2024-11-19 01:48:08.074174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.600 [2024-11-19 01:48:08.093366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.600 [2024-11-19 01:48:08.123115] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:57.600  [2024-11-19T01:48:08.474Z] Copying: 512/512 [B] (average 250 kBps) 00:06:57.859 00:06:57.859 01:48:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ul6ezojgubuqzixs4tgocsl5o7f4xvzd0nznyfkosa936nv5k95mqptimjqzho7lxy75w6qw2diallzl8uv2w29sl1za55clzwln1jhe3yc8vcigcdni7d5r7rsw3vua3n1xabo4lzishsm11bspmo0r1aue9txg4ippbi1rxdqk4afw1oquyyw9ibohvscvbwkn6oiiuzjt9jc7ssgwko9nvpyirnzvx8577k7pxutsge7gbhkfuj0lzf87yo24ek6grcyhwgfho9sfegci6xr90prhpr2qi8afzm0yqo447kmx4qjz8so2pr34q5vhx727n7wrqnh0w61dvdgbfqfmmkso3t63c34vkvhpd48zbzv9ytzrtilmo7jqddftn49eb06t7g4adc8bt76cr3z4mws079shsmwuaem94wsqilgdl0bi4htzmz4txdfwcrm7ev2hum0z2c6k2vnv9el140edp6x4xevihs08ovpz62k88oq7j1hvnivxncld == \u\l\6\e\z\o\j\g\u\b\u\q\z\i\x\s\4\t\g\o\c\s\l\5\o\7\f\4\x\v\z\d\0\n\z\n\y\f\k\o\s\a\9\3\6\n\v\5\k\9\5\m\q\p\t\i\m\j\q\z\h\o\7\l\x\y\7\5\w\6\q\w\2\d\i\a\l\l\z\l\8\u\v\2\w\2\9\s\l\1\z\a\5\5\c\l\z\w\l\n\1\j\h\e\3\y\c\8\v\c\i\g\c\d\n\i\7\d\5\r\7\r\s\w\3\v\u\a\3\n\1\x\a\b\o\4\l\z\i\s\h\s\m\1\1\b\s\p\m\o\0\r\1\a\u\e\9\t\x\g\4\i\p\p\b\i\1\r\x\d\q\k\4\a\f\w\1\o\q\u\y\y\w\9\i\b\o\h\v\s\c\v\b\w\k\n\6\o\i\i\u\z\j\t\9\j\c\7\s\s\g\w\k\o\9\n\v\p\y\i\r\n\z\v\x\8\5\7\7\k\7\p\x\u\t\s\g\e\7\g\b\h\k\f\u\j\0\l\z\f\8\7\y\o\2\4\e\k\6\g\r\c\y\h\w\g\f\h\o\9\s\f\e\g\c\i\6\x\r\9\0\p\r\h\p\r\2\q\i\8\a\f\z\m\0\y\q\o\4\4\7\k\m\x\4\q\j\z\8\s\o\2\p\r\3\4\q\5\v\h\x\7\2\7\n\7\w\r\q\n\h\0\w\6\1\d\v\d\g\b\f\q\f\m\m\k\s\o\3\t\6\3\c\3\4\v\k\v\h\p\d\4\8\z\b\z\v\9\y\t\z\r\t\i\l\m\o\7\j\q\d\d\f\t\n\4\9\e\b\0\6\t\7\g\4\a\d\c\8\b\t\7\6\c\r\3\z\4\m\w\s\0\7\9\s\h\s\m\w\u\a\e\m\9\4\w\s\q\i\l\g\d\l\0\b\i\4\h\t\z\m\z\4\t\x\d\f\w\c\r\m\7\e\v\2\h\u\m\0\z\2\c\6\k\2\v\n\v\9\e\l\1\4\0\e\d\p\6\x\4\x\e\v\i\h\s\0\8\o\v\p\z\6\2\k\8\8\o\q\7\j\1\h\v\n\i\v\x\n\c\l\d ]] 00:06:57.859 01:48:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:57.859 01:48:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:57.859 [2024-11-19 01:48:08.311979] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:06:57.859 [2024-11-19 01:48:08.312080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72626 ] 00:06:57.859 [2024-11-19 01:48:08.457390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.859 [2024-11-19 01:48:08.474906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.118 [2024-11-19 01:48:08.501582] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:58.118  [2024-11-19T01:48:08.733Z] Copying: 512/512 [B] (average 500 kBps) 00:06:58.118 00:06:58.118 01:48:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ul6ezojgubuqzixs4tgocsl5o7f4xvzd0nznyfkosa936nv5k95mqptimjqzho7lxy75w6qw2diallzl8uv2w29sl1za55clzwln1jhe3yc8vcigcdni7d5r7rsw3vua3n1xabo4lzishsm11bspmo0r1aue9txg4ippbi1rxdqk4afw1oquyyw9ibohvscvbwkn6oiiuzjt9jc7ssgwko9nvpyirnzvx8577k7pxutsge7gbhkfuj0lzf87yo24ek6grcyhwgfho9sfegci6xr90prhpr2qi8afzm0yqo447kmx4qjz8so2pr34q5vhx727n7wrqnh0w61dvdgbfqfmmkso3t63c34vkvhpd48zbzv9ytzrtilmo7jqddftn49eb06t7g4adc8bt76cr3z4mws079shsmwuaem94wsqilgdl0bi4htzmz4txdfwcrm7ev2hum0z2c6k2vnv9el140edp6x4xevihs08ovpz62k88oq7j1hvnivxncld == \u\l\6\e\z\o\j\g\u\b\u\q\z\i\x\s\4\t\g\o\c\s\l\5\o\7\f\4\x\v\z\d\0\n\z\n\y\f\k\o\s\a\9\3\6\n\v\5\k\9\5\m\q\p\t\i\m\j\q\z\h\o\7\l\x\y\7\5\w\6\q\w\2\d\i\a\l\l\z\l\8\u\v\2\w\2\9\s\l\1\z\a\5\5\c\l\z\w\l\n\1\j\h\e\3\y\c\8\v\c\i\g\c\d\n\i\7\d\5\r\7\r\s\w\3\v\u\a\3\n\1\x\a\b\o\4\l\z\i\s\h\s\m\1\1\b\s\p\m\o\0\r\1\a\u\e\9\t\x\g\4\i\p\p\b\i\1\r\x\d\q\k\4\a\f\w\1\o\q\u\y\y\w\9\i\b\o\h\v\s\c\v\b\w\k\n\6\o\i\i\u\z\j\t\9\j\c\7\s\s\g\w\k\o\9\n\v\p\y\i\r\n\z\v\x\8\5\7\7\k\7\p\x\u\t\s\g\e\7\g\b\h\k\f\u\j\0\l\z\f\8\7\y\o\2\4\e\k\6\g\r\c\y\h\w\g\f\h\o\9\s\f\e\g\c\i\6\x\r\9\0\p\r\h\p\r\2\q\i\8\a\f\z\m\0\y\q\o\4\4\7\k\m\x\4\q\j\z\8\s\o\2\p\r\3\4\q\5\v\h\x\7\2\7\n\7\w\r\q\n\h\0\w\6\1\d\v\d\g\b\f\q\f\m\m\k\s\o\3\t\6\3\c\3\4\v\k\v\h\p\d\4\8\z\b\z\v\9\y\t\z\r\t\i\l\m\o\7\j\q\d\d\f\t\n\4\9\e\b\0\6\t\7\g\4\a\d\c\8\b\t\7\6\c\r\3\z\4\m\w\s\0\7\9\s\h\s\m\w\u\a\e\m\9\4\w\s\q\i\l\g\d\l\0\b\i\4\h\t\z\m\z\4\t\x\d\f\w\c\r\m\7\e\v\2\h\u\m\0\z\2\c\6\k\2\v\n\v\9\e\l\1\4\0\e\d\p\6\x\4\x\e\v\i\h\s\0\8\o\v\p\z\6\2\k\8\8\o\q\7\j\1\h\v\n\i\v\x\n\c\l\d ]] 00:06:58.118 00:06:58.118 real 0m3.188s 00:06:58.118 user 0m1.535s 00:06:58.118 sys 0m0.696s 00:06:58.118 01:48:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.118 ************************************ 00:06:58.118 END TEST dd_flags_misc_forced_aio 00:06:58.118 01:48:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:58.118 ************************************ 00:06:58.118 01:48:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:58.118 01:48:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:58.118 01:48:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:58.118 00:06:58.118 real 0m14.730s 00:06:58.118 user 0m5.957s 00:06:58.118 sys 0m4.043s 00:06:58.118 01:48:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.118 01:48:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
00:06:58.118 ************************************ 00:06:58.118 END TEST spdk_dd_posix 00:06:58.118 ************************************ 00:06:58.378 01:48:08 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:58.378 01:48:08 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.378 01:48:08 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.378 01:48:08 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:58.378 ************************************ 00:06:58.378 START TEST spdk_dd_malloc 00:06:58.378 ************************************ 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:58.378 * Looking for test storage... 00:06:58.378 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:58.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.378 --rc genhtml_branch_coverage=1 00:06:58.378 --rc genhtml_function_coverage=1 00:06:58.378 --rc genhtml_legend=1 00:06:58.378 --rc geninfo_all_blocks=1 00:06:58.378 --rc geninfo_unexecuted_blocks=1 00:06:58.378 00:06:58.378 ' 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:58.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.378 --rc genhtml_branch_coverage=1 00:06:58.378 --rc genhtml_function_coverage=1 00:06:58.378 --rc genhtml_legend=1 00:06:58.378 --rc geninfo_all_blocks=1 00:06:58.378 --rc geninfo_unexecuted_blocks=1 00:06:58.378 00:06:58.378 ' 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:58.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.378 --rc genhtml_branch_coverage=1 00:06:58.378 --rc genhtml_function_coverage=1 00:06:58.378 --rc genhtml_legend=1 00:06:58.378 --rc geninfo_all_blocks=1 00:06:58.378 --rc geninfo_unexecuted_blocks=1 00:06:58.378 00:06:58.378 ' 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:58.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.378 --rc genhtml_branch_coverage=1 00:06:58.378 --rc genhtml_function_coverage=1 00:06:58.378 --rc genhtml_legend=1 00:06:58.378 --rc geninfo_all_blocks=1 00:06:58.378 --rc geninfo_unexecuted_blocks=1 00:06:58.378 00:06:58.378 ' 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.378 01:48:08 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:58.378 01:48:08 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.379 01:48:08 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:58.379 01:48:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.379 01:48:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.379 01:48:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:58.379 ************************************ 00:06:58.379 START TEST dd_malloc_copy 00:06:58.379 ************************************ 00:06:58.379 01:48:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:06:58.379 01:48:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:58.379 01:48:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:58.379 01:48:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:06:58.379 01:48:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:58.379 01:48:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:58.379 01:48:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:58.379 01:48:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:58.379 01:48:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:58.379 01:48:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:58.379 01:48:08 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:58.638 [2024-11-19 01:48:08.997791] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:06:58.638 [2024-11-19 01:48:08.997938] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72708 ] 00:06:58.638 { 00:06:58.638 "subsystems": [ 00:06:58.638 { 00:06:58.638 "subsystem": "bdev", 00:06:58.638 "config": [ 00:06:58.638 { 00:06:58.638 "params": { 00:06:58.638 "block_size": 512, 00:06:58.638 "num_blocks": 1048576, 00:06:58.638 "name": "malloc0" 00:06:58.638 }, 00:06:58.638 "method": "bdev_malloc_create" 00:06:58.638 }, 00:06:58.638 { 00:06:58.638 "params": { 00:06:58.638 "block_size": 512, 00:06:58.638 "num_blocks": 1048576, 00:06:58.638 "name": "malloc1" 00:06:58.638 }, 00:06:58.638 "method": "bdev_malloc_create" 00:06:58.638 }, 00:06:58.638 { 00:06:58.638 "method": "bdev_wait_for_examine" 00:06:58.638 } 00:06:58.638 ] 00:06:58.638 } 00:06:58.638 ] 00:06:58.638 } 00:06:58.638 [2024-11-19 01:48:09.144781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.638 [2024-11-19 01:48:09.163695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.638 [2024-11-19 01:48:09.190081] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:00.015  [2024-11-19T01:48:11.567Z] Copying: 238/512 [MB] (238 MBps) [2024-11-19T01:48:11.567Z] Copying: 476/512 [MB] (238 MBps) [2024-11-19T01:48:11.826Z] Copying: 512/512 [MB] (average 237 MBps) 00:07:01.211 00:07:01.211 01:48:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:01.211 01:48:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:01.211 01:48:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:01.211 01:48:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:01.471 [2024-11-19 01:48:11.876568] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
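Both malloc_copy passes hand spdk_dd a bdev configuration over --json /dev/fd/62. Reassembled from the fragments echoed above, the document creates two 512 MiB malloc bdevs (1048576 blocks of 512 bytes each) and waits for examine before the copy starts:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
              "method": "bdev_malloc_create" },
            { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc1" },
              "method": "bdev_malloc_create" },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }

At the ~237 MBps averaged by the first pass, one 512 MiB copy takes roughly 512/237 ≈ 2.2 s; two passes plus app start-up and teardown line up with the ~5.7 s wall time reported when the test finishes below.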
00:07:01.471 [2024-11-19 01:48:11.876667] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72750 ] 00:07:01.471 { 00:07:01.471 "subsystems": [ 00:07:01.471 { 00:07:01.471 "subsystem": "bdev", 00:07:01.471 "config": [ 00:07:01.471 { 00:07:01.471 "params": { 00:07:01.471 "block_size": 512, 00:07:01.471 "num_blocks": 1048576, 00:07:01.471 "name": "malloc0" 00:07:01.471 }, 00:07:01.471 "method": "bdev_malloc_create" 00:07:01.471 }, 00:07:01.471 { 00:07:01.471 "params": { 00:07:01.471 "block_size": 512, 00:07:01.471 "num_blocks": 1048576, 00:07:01.471 "name": "malloc1" 00:07:01.471 }, 00:07:01.471 "method": "bdev_malloc_create" 00:07:01.471 }, 00:07:01.471 { 00:07:01.471 "method": "bdev_wait_for_examine" 00:07:01.471 } 00:07:01.471 ] 00:07:01.471 } 00:07:01.471 ] 00:07:01.471 } 00:07:01.471 [2024-11-19 01:48:12.021561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.471 [2024-11-19 01:48:12.039096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.471 [2024-11-19 01:48:12.065328] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:02.848  [2024-11-19T01:48:14.400Z] Copying: 239/512 [MB] (239 MBps) [2024-11-19T01:48:14.400Z] Copying: 479/512 [MB] (240 MBps) [2024-11-19T01:48:14.969Z] Copying: 512/512 [MB] (average 240 MBps) 00:07:04.354 00:07:04.354 00:07:04.354 real 0m5.727s 00:07:04.354 user 0m5.130s 00:07:04.354 sys 0m0.451s 00:07:04.354 ************************************ 00:07:04.354 END TEST dd_malloc_copy 00:07:04.354 ************************************ 00:07:04.354 01:48:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.354 01:48:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:04.354 00:07:04.354 real 0m5.964s 00:07:04.354 user 0m5.264s 00:07:04.354 sys 0m0.561s 00:07:04.354 01:48:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.354 01:48:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:04.354 ************************************ 00:07:04.354 END TEST spdk_dd_malloc 00:07:04.354 ************************************ 00:07:04.354 01:48:14 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:04.354 01:48:14 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:04.354 01:48:14 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.354 01:48:14 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:04.354 ************************************ 00:07:04.354 START TEST spdk_dd_bdev_to_bdev 00:07:04.354 ************************************ 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:04.354 * Looking for test storage... 
00:07:04.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:04.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.354 --rc genhtml_branch_coverage=1 00:07:04.354 --rc genhtml_function_coverage=1 00:07:04.354 --rc genhtml_legend=1 00:07:04.354 --rc geninfo_all_blocks=1 00:07:04.354 --rc geninfo_unexecuted_blocks=1 00:07:04.354 00:07:04.354 ' 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:04.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.354 --rc genhtml_branch_coverage=1 00:07:04.354 --rc genhtml_function_coverage=1 00:07:04.354 --rc genhtml_legend=1 00:07:04.354 --rc geninfo_all_blocks=1 00:07:04.354 --rc geninfo_unexecuted_blocks=1 00:07:04.354 00:07:04.354 ' 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:04.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.354 --rc genhtml_branch_coverage=1 00:07:04.354 --rc genhtml_function_coverage=1 00:07:04.354 --rc genhtml_legend=1 00:07:04.354 --rc geninfo_all_blocks=1 00:07:04.354 --rc geninfo_unexecuted_blocks=1 00:07:04.354 00:07:04.354 ' 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:04.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.354 --rc genhtml_branch_coverage=1 00:07:04.354 --rc genhtml_function_coverage=1 00:07:04.354 --rc genhtml_legend=1 00:07:04.354 --rc geninfo_all_blocks=1 00:07:04.354 --rc geninfo_unexecuted_blocks=1 00:07:04.354 00:07:04.354 ' 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.354 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.355 01:48:14 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.355 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.355 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.355 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.355 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:07:04.355 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.355 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:04.355 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:04.355 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:04.355 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:04.355 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:04.355 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:04.355 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:07:04.355 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:04.355 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:04.355 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:07:04.355 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:04.355 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:04.355 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:07:04.355 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:04.355 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:04.355 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:04.355 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:04.355 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:04.355 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:04.355 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:04.355 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.355 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:04.355 ************************************ 00:07:04.355 START TEST dd_inflate_file 00:07:04.355 ************************************ 00:07:04.355 01:48:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:04.614 [2024-11-19 01:48:15.000011] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:07:04.614 [2024-11-19 01:48:15.000109] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72857 ] 00:07:04.614 [2024-11-19 01:48:15.145014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.614 [2024-11-19 01:48:15.162533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.614 [2024-11-19 01:48:15.188420] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:04.872  [2024-11-19T01:48:15.487Z] Copying: 64/64 [MB] (average 1641 MBps) 00:07:04.872 00:07:04.872 00:07:04.872 real 0m0.408s 00:07:04.872 user 0m0.222s 00:07:04.872 sys 0m0.206s 00:07:04.872 01:48:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.872 01:48:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:07:04.872 ************************************ 00:07:04.872 END TEST dd_inflate_file 00:07:04.872 ************************************ 00:07:04.872 01:48:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:04.872 01:48:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:04.872 01:48:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:04.872 01:48:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:04.872 01:48:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:04.872 01:48:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:04.872 01:48:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.872 01:48:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:04.873 01:48:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:04.873 ************************************ 00:07:04.873 START TEST dd_copy_to_out_bdev 00:07:04.873 ************************************ 00:07:04.873 01:48:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:04.873 [2024-11-19 01:48:15.458578] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
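The 67108891 bytes measured for test_file0_size above is exact: the magic line written earlier ('This Is Our Magic, find it' plus echo's trailing newline) is 27 bytes, and dd_inflate_file then appends 64 one-MiB blocks from /dev/zero, so 27 + 64 × 1048576 = 67108891. Reproduced with the same flags, reusing the illustrative $DD path from the earlier sketches:

    echo 'This Is Our Magic, find it' > dd.dump0   # 26 characters + newline = 27 bytes
    "$DD" --if=/dev/zero --of=dd.dump0 --oflag=append --bs=1048576 --count=64
    wc -c < dd.dump0                               # 27 + 64*1048576 = 67108891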
00:07:04.873 [2024-11-19 01:48:15.458712] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72885 ] 00:07:04.873 { 00:07:04.873 "subsystems": [ 00:07:04.873 { 00:07:04.873 "subsystem": "bdev", 00:07:04.873 "config": [ 00:07:04.873 { 00:07:04.873 "params": { 00:07:04.873 "trtype": "pcie", 00:07:04.873 "traddr": "0000:00:10.0", 00:07:04.873 "name": "Nvme0" 00:07:04.873 }, 00:07:04.873 "method": "bdev_nvme_attach_controller" 00:07:04.873 }, 00:07:04.873 { 00:07:04.873 "params": { 00:07:04.873 "trtype": "pcie", 00:07:04.873 "traddr": "0000:00:11.0", 00:07:04.873 "name": "Nvme1" 00:07:04.873 }, 00:07:04.873 "method": "bdev_nvme_attach_controller" 00:07:04.873 }, 00:07:04.873 { 00:07:04.873 "method": "bdev_wait_for_examine" 00:07:04.873 } 00:07:04.873 ] 00:07:04.873 } 00:07:04.873 ] 00:07:04.873 } 00:07:05.131 [2024-11-19 01:48:15.601401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.132 [2024-11-19 01:48:15.621806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.132 [2024-11-19 01:48:15.649284] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.509  [2024-11-19T01:48:17.124Z] Copying: 51/64 [MB] (51 MBps) [2024-11-19T01:48:17.384Z] Copying: 64/64 [MB] (average 51 MBps) 00:07:06.769 00:07:06.769 00:07:06.769 real 0m1.755s 00:07:06.769 user 0m1.597s 00:07:06.769 sys 0m1.445s 00:07:06.769 01:48:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.769 ************************************ 00:07:06.769 END TEST dd_copy_to_out_bdev 00:07:06.769 ************************************ 00:07:06.769 01:48:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:06.769 01:48:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:06.769 01:48:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:06.769 01:48:17 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.769 01:48:17 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.769 01:48:17 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:06.769 ************************************ 00:07:06.769 START TEST dd_offset_magic 00:07:06.769 ************************************ 00:07:06.769 01:48:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:07:06.769 01:48:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:06.769 01:48:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:06.769 01:48:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:06.769 01:48:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:06.769 01:48:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:06.769 01:48:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 
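The copy_to_out_bdev numbers above are internally consistent: 64 MiB at the reported average of 51 MBps is about 1.25 s of actual copying, with controller attach and app start-up making up the rest of the 1.755 s wall time, and the comparatively high sys time (1.445 s) reflecting kernel-side data movement between the file and the PCIe-attached bdevs. The two controllers declared in the --json config could also be attached at runtime through SPDK's RPC client; the lines below follow rpc.py's bdev_nvme_attach_controller command and should be read as a sketch, not a verified invocation from this run:

    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:00:10.0
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t pcie -a 0000:00:11.0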
00:07:06.769 01:48:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:06.769 01:48:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:06.769 [2024-11-19 01:48:17.272712] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:07:06.769 [2024-11-19 01:48:17.272814] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72930 ] 00:07:06.769 { 00:07:06.769 "subsystems": [ 00:07:06.769 { 00:07:06.769 "subsystem": "bdev", 00:07:06.769 "config": [ 00:07:06.769 { 00:07:06.769 "params": { 00:07:06.769 "trtype": "pcie", 00:07:06.769 "traddr": "0000:00:10.0", 00:07:06.769 "name": "Nvme0" 00:07:06.769 }, 00:07:06.769 "method": "bdev_nvme_attach_controller" 00:07:06.769 }, 00:07:06.769 { 00:07:06.769 "params": { 00:07:06.769 "trtype": "pcie", 00:07:06.769 "traddr": "0000:00:11.0", 00:07:06.769 "name": "Nvme1" 00:07:06.769 }, 00:07:06.769 "method": "bdev_nvme_attach_controller" 00:07:06.769 }, 00:07:06.769 { 00:07:06.769 "method": "bdev_wait_for_examine" 00:07:06.769 } 00:07:06.769 ] 00:07:06.769 } 00:07:06.769 ] 00:07:06.769 } 00:07:07.074 [2024-11-19 01:48:17.410994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.074 [2024-11-19 01:48:17.432311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.074 [2024-11-19 01:48:17.463921] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:07.354  [2024-11-19T01:48:17.969Z] Copying: 65/65 [MB] (average 1015 MBps) 00:07:07.354 00:07:07.354 01:48:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:07.354 01:48:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:07.354 01:48:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:07.354 01:48:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:07.354 [2024-11-19 01:48:17.909732] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:07:07.354 [2024-11-19 01:48:17.909856] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72950 ] 00:07:07.354 { 00:07:07.354 "subsystems": [ 00:07:07.354 { 00:07:07.354 "subsystem": "bdev", 00:07:07.354 "config": [ 00:07:07.354 { 00:07:07.354 "params": { 00:07:07.354 "trtype": "pcie", 00:07:07.354 "traddr": "0000:00:10.0", 00:07:07.354 "name": "Nvme0" 00:07:07.354 }, 00:07:07.354 "method": "bdev_nvme_attach_controller" 00:07:07.354 }, 00:07:07.354 { 00:07:07.354 "params": { 00:07:07.354 "trtype": "pcie", 00:07:07.354 "traddr": "0000:00:11.0", 00:07:07.354 "name": "Nvme1" 00:07:07.354 }, 00:07:07.354 "method": "bdev_nvme_attach_controller" 00:07:07.354 }, 00:07:07.354 { 00:07:07.354 "method": "bdev_wait_for_examine" 00:07:07.354 } 00:07:07.354 ] 00:07:07.354 } 00:07:07.354 ] 00:07:07.354 } 00:07:07.613 [2024-11-19 01:48:18.054377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.613 [2024-11-19 01:48:18.073042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.613 [2024-11-19 01:48:18.102586] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:07.873  [2024-11-19T01:48:18.488Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:07.873 00:07:07.873 01:48:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:07.873 01:48:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:07.873 01:48:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:07.873 01:48:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:07.873 01:48:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:07.873 01:48:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:07.873 01:48:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:07.873 [2024-11-19 01:48:18.422066] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:07:07.873 [2024-11-19 01:48:18.422169] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72961 ] 00:07:07.873 { 00:07:07.873 "subsystems": [ 00:07:07.873 { 00:07:07.873 "subsystem": "bdev", 00:07:07.873 "config": [ 00:07:07.873 { 00:07:07.873 "params": { 00:07:07.873 "trtype": "pcie", 00:07:07.873 "traddr": "0000:00:10.0", 00:07:07.873 "name": "Nvme0" 00:07:07.873 }, 00:07:07.873 "method": "bdev_nvme_attach_controller" 00:07:07.873 }, 00:07:07.873 { 00:07:07.873 "params": { 00:07:07.873 "trtype": "pcie", 00:07:07.873 "traddr": "0000:00:11.0", 00:07:07.873 "name": "Nvme1" 00:07:07.873 }, 00:07:07.873 "method": "bdev_nvme_attach_controller" 00:07:07.873 }, 00:07:07.873 { 00:07:07.873 "method": "bdev_wait_for_examine" 00:07:07.873 } 00:07:07.873 ] 00:07:07.873 } 00:07:07.873 ] 00:07:07.873 } 00:07:08.132 [2024-11-19 01:48:18.560461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.132 [2024-11-19 01:48:18.579604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.132 [2024-11-19 01:48:18.607752] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.391  [2024-11-19T01:48:19.006Z] Copying: 65/65 [MB] (average 1083 MBps) 00:07:08.391 00:07:08.391 01:48:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:08.391 01:48:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:08.391 01:48:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:08.391 01:48:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:08.650 [2024-11-19 01:48:19.032471] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:07:08.650 [2024-11-19 01:48:19.032593] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72981 ] 00:07:08.650 { 00:07:08.650 "subsystems": [ 00:07:08.650 { 00:07:08.650 "subsystem": "bdev", 00:07:08.650 "config": [ 00:07:08.650 { 00:07:08.650 "params": { 00:07:08.650 "trtype": "pcie", 00:07:08.650 "traddr": "0000:00:10.0", 00:07:08.650 "name": "Nvme0" 00:07:08.650 }, 00:07:08.650 "method": "bdev_nvme_attach_controller" 00:07:08.650 }, 00:07:08.650 { 00:07:08.650 "params": { 00:07:08.650 "trtype": "pcie", 00:07:08.650 "traddr": "0000:00:11.0", 00:07:08.650 "name": "Nvme1" 00:07:08.650 }, 00:07:08.650 "method": "bdev_nvme_attach_controller" 00:07:08.650 }, 00:07:08.650 { 00:07:08.650 "method": "bdev_wait_for_examine" 00:07:08.650 } 00:07:08.650 ] 00:07:08.650 } 00:07:08.650 ] 00:07:08.650 } 00:07:08.650 [2024-11-19 01:48:19.177170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.650 [2024-11-19 01:48:19.195721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.650 [2024-11-19 01:48:19.224171] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:08.909  [2024-11-19T01:48:19.524Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:08.909 00:07:08.909 01:48:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:08.909 01:48:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:08.909 00:07:08.909 real 0m2.275s 00:07:08.909 user 0m1.669s 00:07:08.909 sys 0m0.596s 00:07:08.909 01:48:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.909 01:48:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:08.909 ************************************ 00:07:08.909 END TEST dd_offset_magic 00:07:08.909 ************************************ 00:07:09.168 01:48:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:09.168 01:48:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:09.168 01:48:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:09.168 01:48:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:09.168 01:48:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:09.168 01:48:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:09.168 01:48:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:09.168 01:48:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:09.168 01:48:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:09.168 01:48:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:09.168 01:48:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:09.168 [2024-11-19 01:48:19.591812] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:07:09.168 [2024-11-19 01:48:19.591911] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73010 ] 00:07:09.168 { 00:07:09.168 "subsystems": [ 00:07:09.168 { 00:07:09.168 "subsystem": "bdev", 00:07:09.168 "config": [ 00:07:09.168 { 00:07:09.168 "params": { 00:07:09.168 "trtype": "pcie", 00:07:09.168 "traddr": "0000:00:10.0", 00:07:09.168 "name": "Nvme0" 00:07:09.168 }, 00:07:09.168 "method": "bdev_nvme_attach_controller" 00:07:09.168 }, 00:07:09.168 { 00:07:09.168 "params": { 00:07:09.168 "trtype": "pcie", 00:07:09.168 "traddr": "0000:00:11.0", 00:07:09.168 "name": "Nvme1" 00:07:09.168 }, 00:07:09.168 "method": "bdev_nvme_attach_controller" 00:07:09.168 }, 00:07:09.168 { 00:07:09.168 "method": "bdev_wait_for_examine" 00:07:09.168 } 00:07:09.168 ] 00:07:09.168 } 00:07:09.168 ] 00:07:09.168 } 00:07:09.168 [2024-11-19 01:48:19.738709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.168 [2024-11-19 01:48:19.757095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.427 [2024-11-19 01:48:19.785655] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:09.427  [2024-11-19T01:48:20.301Z] Copying: 5120/5120 [kB] (average 1666 MBps) 00:07:09.686 00:07:09.686 01:48:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:09.686 01:48:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:09.686 01:48:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:09.686 01:48:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:09.686 01:48:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:09.686 01:48:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:09.686 01:48:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:09.686 01:48:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:09.686 01:48:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:09.686 01:48:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:09.686 [2024-11-19 01:48:20.116994] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:07:09.686 [2024-11-19 01:48:20.117100] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73028 ] 00:07:09.686 { 00:07:09.686 "subsystems": [ 00:07:09.686 { 00:07:09.686 "subsystem": "bdev", 00:07:09.686 "config": [ 00:07:09.686 { 00:07:09.686 "params": { 00:07:09.686 "trtype": "pcie", 00:07:09.686 "traddr": "0000:00:10.0", 00:07:09.686 "name": "Nvme0" 00:07:09.686 }, 00:07:09.686 "method": "bdev_nvme_attach_controller" 00:07:09.686 }, 00:07:09.686 { 00:07:09.686 "params": { 00:07:09.686 "trtype": "pcie", 00:07:09.686 "traddr": "0000:00:11.0", 00:07:09.686 "name": "Nvme1" 00:07:09.686 }, 00:07:09.686 "method": "bdev_nvme_attach_controller" 00:07:09.686 }, 00:07:09.686 { 00:07:09.686 "method": "bdev_wait_for_examine" 00:07:09.686 } 00:07:09.686 ] 00:07:09.686 } 00:07:09.686 ] 00:07:09.686 } 00:07:09.686 [2024-11-19 01:48:20.262091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.686 [2024-11-19 01:48:20.282252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.945 [2024-11-19 01:48:20.312371] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:09.945  [2024-11-19T01:48:20.818Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:07:10.203 00:07:10.203 01:48:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:10.203 00:07:10.203 real 0m5.853s 00:07:10.203 user 0m4.408s 00:07:10.203 sys 0m2.766s 00:07:10.203 01:48:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.203 01:48:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:10.203 ************************************ 00:07:10.203 END TEST spdk_dd_bdev_to_bdev 00:07:10.203 ************************************ 00:07:10.203 01:48:20 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:10.203 01:48:20 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:10.203 01:48:20 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.203 01:48:20 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.203 01:48:20 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:10.203 ************************************ 00:07:10.203 START TEST spdk_dd_uring 00:07:10.203 ************************************ 00:07:10.203 01:48:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:10.203 * Looking for test storage... 
00:07:10.203 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:10.203 01:48:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:10.203 01:48:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lcov --version 00:07:10.203 01:48:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:10.462 01:48:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:10.462 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.462 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.462 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.462 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.462 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.462 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.462 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.462 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.462 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.462 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.462 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.462 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:07:10.462 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:07:10.462 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.462 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:10.462 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:07:10.462 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:07:10.462 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.462 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:07:10.462 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.462 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:07:10.462 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:07:10.462 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.462 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:07:10.462 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.462 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.462 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.462 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:07:10.462 01:48:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.462 01:48:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:10.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.463 --rc genhtml_branch_coverage=1 00:07:10.463 --rc genhtml_function_coverage=1 00:07:10.463 --rc genhtml_legend=1 00:07:10.463 --rc geninfo_all_blocks=1 00:07:10.463 --rc geninfo_unexecuted_blocks=1 00:07:10.463 00:07:10.463 ' 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:10.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.463 --rc genhtml_branch_coverage=1 00:07:10.463 --rc genhtml_function_coverage=1 00:07:10.463 --rc genhtml_legend=1 00:07:10.463 --rc geninfo_all_blocks=1 00:07:10.463 --rc geninfo_unexecuted_blocks=1 00:07:10.463 00:07:10.463 ' 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:10.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.463 --rc genhtml_branch_coverage=1 00:07:10.463 --rc genhtml_function_coverage=1 00:07:10.463 --rc genhtml_legend=1 00:07:10.463 --rc geninfo_all_blocks=1 00:07:10.463 --rc geninfo_unexecuted_blocks=1 00:07:10.463 00:07:10.463 ' 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:10.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.463 --rc genhtml_branch_coverage=1 00:07:10.463 --rc genhtml_function_coverage=1 00:07:10.463 --rc genhtml_legend=1 00:07:10.463 --rc geninfo_all_blocks=1 00:07:10.463 --rc geninfo_unexecuted_blocks=1 00:07:10.463 00:07:10.463 ' 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:10.463 ************************************ 00:07:10.463 START TEST dd_uring_copy 00:07:10.463 ************************************ 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:10.463 
01:48:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=vjqegzbwnyedmf1deb3kaz7tl7s11ndxtecju6kp7rqummsz4ailgwodvh4u4m727htqgj27a9sm00ikdpjyqwrwac6wk67je4fmmrr9amq8esd898b3cdz9bxsycmak88j45lgvwtqtcmbhvx8j54j3gx6jnomjfmynlg4392815ewf38clagobqhobtg3v22ti3y8u5ukatxo3r6fywj9htvim6elkyxuez0k3zwxjzenr4poeljgnl482a0ywo77xxrh7m73o0qf0z1bziieydfgbnfa1l8w3px8yczp0s4fsj2tr5timdivx7555i2j29652q85epeefpaqvs2zym63tl8k3g51ckl2a6e1nuntzy5220qetp2lywxr9v5kwfd7f2rfarc0eaaufcr9ah91zjxs3alc444t7aatl5e2h9o4mlecznb84hjz5ofs6rkd9aaxcr0ytcdniichhfixumdxte938u2mnstr8n9bbm2gfm17v0pcdg9xfuks6v2gt4vo9e3b4aywiaewjby3dyaea1ma7yzagveqzqni9a7fjlzxp1b7eyt7yc6ks8bis1ylrpnq7o4zjakxr94d88nlef18t45ut3r8rrct7zwj7zbxtrfz5ami7vs7buhg9hvoh154b5b5o5xaopz5wzao606v10pr6cixwwefxxtgnynnokfmcu6lhmr5fz6v6txlgx8j3trael59c01io4y02olg39x8h6fvonrkj5r491ih1x3a9fhp9jvaj64i0wsk1utmaebu4tx3sqo7s0y1rdgxww4ukcy4mckz1a0qeiyxcsl5lufqadivkag9cnkjzgi8ai1vzboppw9hhwwnkobkt2ae9v7nbdplwe1drn3hltq7xz8xvy5q9gqnj6jf2mk10rbfk8qj1b01grhp9h3ezzzh8z5g95z71826z2dcs7azgz9cwfjt53i2pzgfvjsvx28x6lvg7vgpqzyvyr1yktfi9h82d3olk4tq5lr2ktxkmg416 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
vjqegzbwnyedmf1deb3kaz7tl7s11ndxtecju6kp7rqummsz4ailgwodvh4u4m727htqgj27a9sm00ikdpjyqwrwac6wk67je4fmmrr9amq8esd898b3cdz9bxsycmak88j45lgvwtqtcmbhvx8j54j3gx6jnomjfmynlg4392815ewf38clagobqhobtg3v22ti3y8u5ukatxo3r6fywj9htvim6elkyxuez0k3zwxjzenr4poeljgnl482a0ywo77xxrh7m73o0qf0z1bziieydfgbnfa1l8w3px8yczp0s4fsj2tr5timdivx7555i2j29652q85epeefpaqvs2zym63tl8k3g51ckl2a6e1nuntzy5220qetp2lywxr9v5kwfd7f2rfarc0eaaufcr9ah91zjxs3alc444t7aatl5e2h9o4mlecznb84hjz5ofs6rkd9aaxcr0ytcdniichhfixumdxte938u2mnstr8n9bbm2gfm17v0pcdg9xfuks6v2gt4vo9e3b4aywiaewjby3dyaea1ma7yzagveqzqni9a7fjlzxp1b7eyt7yc6ks8bis1ylrpnq7o4zjakxr94d88nlef18t45ut3r8rrct7zwj7zbxtrfz5ami7vs7buhg9hvoh154b5b5o5xaopz5wzao606v10pr6cixwwefxxtgnynnokfmcu6lhmr5fz6v6txlgx8j3trael59c01io4y02olg39x8h6fvonrkj5r491ih1x3a9fhp9jvaj64i0wsk1utmaebu4tx3sqo7s0y1rdgxww4ukcy4mckz1a0qeiyxcsl5lufqadivkag9cnkjzgi8ai1vzboppw9hhwwnkobkt2ae9v7nbdplwe1drn3hltq7xz8xvy5q9gqnj6jf2mk10rbfk8qj1b01grhp9h3ezzzh8z5g95z71826z2dcs7azgz9cwfjt53i2pzgfvjsvx28x6lvg7vgpqzyvyr1yktfi9h82d3olk4tq5lr2ktxkmg416 00:07:10.463 01:48:20 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:10.463 [2024-11-19 01:48:20.949924] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:07:10.463 [2024-11-19 01:48:20.950023] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73105 ] 00:07:10.722 [2024-11-19 01:48:21.097623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.722 [2024-11-19 01:48:21.118388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.722 [2024-11-19 01:48:21.147275] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.291  [2024-11-19T01:48:21.906Z] Copying: 511/511 [MB] (average 1599 MBps) 00:07:11.291 00:07:11.291 01:48:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:11.291 01:48:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:11.291 01:48:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:11.291 01:48:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:11.291 [2024-11-19 01:48:21.873958] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
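Behind the init_zram/create_zram_dev/set_zram_dev helpers traced above sits the kernel's zram hot-add interface. The same plumbing, reduced to a sketch (assumes root and a zram-enabled kernel; the sysfs disksize path is my inference, since the trace records only the bare echo 512M):

[[ -e /sys/class/zram-control ]] || exit 1   # same existence guard init_zram performs
id=$(cat /sys/class/zram-control/hot_add)    # kernel allocates /dev/zram$id (1 in this run)
echo 512M > "/sys/block/zram$id/disksize"    # size the device before first use (inferred target)

The resulting /dev/zram1 is what the bdev_uring_create config just below binds to the uring0 bdev, with a 512-byte-block malloc0 bdev on the other side of the copy.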
00:07:11.291 [2024-11-19 01:48:21.874534] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73117 ] 00:07:11.291 { 00:07:11.291 "subsystems": [ 00:07:11.291 { 00:07:11.291 "subsystem": "bdev", 00:07:11.291 "config": [ 00:07:11.291 { 00:07:11.291 "params": { 00:07:11.291 "block_size": 512, 00:07:11.291 "num_blocks": 1048576, 00:07:11.291 "name": "malloc0" 00:07:11.291 }, 00:07:11.291 "method": "bdev_malloc_create" 00:07:11.291 }, 00:07:11.291 { 00:07:11.291 "params": { 00:07:11.291 "filename": "/dev/zram1", 00:07:11.291 "name": "uring0" 00:07:11.291 }, 00:07:11.291 "method": "bdev_uring_create" 00:07:11.291 }, 00:07:11.291 { 00:07:11.291 "method": "bdev_wait_for_examine" 00:07:11.291 } 00:07:11.291 ] 00:07:11.291 } 00:07:11.291 ] 00:07:11.291 } 00:07:11.550 [2024-11-19 01:48:22.014391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.550 [2024-11-19 01:48:22.032665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.550 [2024-11-19 01:48:22.060184] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.928  [2024-11-19T01:48:24.481Z] Copying: 246/512 [MB] (246 MBps) [2024-11-19T01:48:24.481Z] Copying: 505/512 [MB] (259 MBps) [2024-11-19T01:48:24.481Z] Copying: 512/512 [MB] (average 252 MBps) 00:07:13.866 00:07:13.866 01:48:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:13.866 01:48:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:13.866 01:48:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:13.866 01:48:24 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:13.866 [2024-11-19 01:48:24.475796] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:07:13.866 [2024-11-19 01:48:24.475884] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73155 ] 00:07:14.124 { 00:07:14.124 "subsystems": [ 00:07:14.124 { 00:07:14.125 "subsystem": "bdev", 00:07:14.125 "config": [ 00:07:14.125 { 00:07:14.125 "params": { 00:07:14.125 "block_size": 512, 00:07:14.125 "num_blocks": 1048576, 00:07:14.125 "name": "malloc0" 00:07:14.125 }, 00:07:14.125 "method": "bdev_malloc_create" 00:07:14.125 }, 00:07:14.125 { 00:07:14.125 "params": { 00:07:14.125 "filename": "/dev/zram1", 00:07:14.125 "name": "uring0" 00:07:14.125 }, 00:07:14.125 "method": "bdev_uring_create" 00:07:14.125 }, 00:07:14.125 { 00:07:14.125 "method": "bdev_wait_for_examine" 00:07:14.125 } 00:07:14.125 ] 00:07:14.125 } 00:07:14.125 ] 00:07:14.125 } 00:07:14.125 [2024-11-19 01:48:24.619238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.125 [2024-11-19 01:48:24.639541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.125 [2024-11-19 01:48:24.671941] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:15.501  [2024-11-19T01:48:27.053Z] Copying: 188/512 [MB] (188 MBps) [2024-11-19T01:48:27.622Z] Copying: 368/512 [MB] (179 MBps) [2024-11-19T01:48:27.881Z] Copying: 512/512 [MB] (average 181 MBps) 00:07:17.266 00:07:17.266 01:48:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:17.267 01:48:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ vjqegzbwnyedmf1deb3kaz7tl7s11ndxtecju6kp7rqummsz4ailgwodvh4u4m727htqgj27a9sm00ikdpjyqwrwac6wk67je4fmmrr9amq8esd898b3cdz9bxsycmak88j45lgvwtqtcmbhvx8j54j3gx6jnomjfmynlg4392815ewf38clagobqhobtg3v22ti3y8u5ukatxo3r6fywj9htvim6elkyxuez0k3zwxjzenr4poeljgnl482a0ywo77xxrh7m73o0qf0z1bziieydfgbnfa1l8w3px8yczp0s4fsj2tr5timdivx7555i2j29652q85epeefpaqvs2zym63tl8k3g51ckl2a6e1nuntzy5220qetp2lywxr9v5kwfd7f2rfarc0eaaufcr9ah91zjxs3alc444t7aatl5e2h9o4mlecznb84hjz5ofs6rkd9aaxcr0ytcdniichhfixumdxte938u2mnstr8n9bbm2gfm17v0pcdg9xfuks6v2gt4vo9e3b4aywiaewjby3dyaea1ma7yzagveqzqni9a7fjlzxp1b7eyt7yc6ks8bis1ylrpnq7o4zjakxr94d88nlef18t45ut3r8rrct7zwj7zbxtrfz5ami7vs7buhg9hvoh154b5b5o5xaopz5wzao606v10pr6cixwwefxxtgnynnokfmcu6lhmr5fz6v6txlgx8j3trael59c01io4y02olg39x8h6fvonrkj5r491ih1x3a9fhp9jvaj64i0wsk1utmaebu4tx3sqo7s0y1rdgxww4ukcy4mckz1a0qeiyxcsl5lufqadivkag9cnkjzgi8ai1vzboppw9hhwwnkobkt2ae9v7nbdplwe1drn3hltq7xz8xvy5q9gqnj6jf2mk10rbfk8qj1b01grhp9h3ezzzh8z5g95z71826z2dcs7azgz9cwfjt53i2pzgfvjsvx28x6lvg7vgpqzyvyr1yktfi9h82d3olk4tq5lr2ktxkmg416 == 
\v\j\q\e\g\z\b\w\n\y\e\d\m\f\1\d\e\b\3\k\a\z\7\t\l\7\s\1\1\n\d\x\t\e\c\j\u\6\k\p\7\r\q\u\m\m\s\z\4\a\i\l\g\w\o\d\v\h\4\u\4\m\7\2\7\h\t\q\g\j\2\7\a\9\s\m\0\0\i\k\d\p\j\y\q\w\r\w\a\c\6\w\k\6\7\j\e\4\f\m\m\r\r\9\a\m\q\8\e\s\d\8\9\8\b\3\c\d\z\9\b\x\s\y\c\m\a\k\8\8\j\4\5\l\g\v\w\t\q\t\c\m\b\h\v\x\8\j\5\4\j\3\g\x\6\j\n\o\m\j\f\m\y\n\l\g\4\3\9\2\8\1\5\e\w\f\3\8\c\l\a\g\o\b\q\h\o\b\t\g\3\v\2\2\t\i\3\y\8\u\5\u\k\a\t\x\o\3\r\6\f\y\w\j\9\h\t\v\i\m\6\e\l\k\y\x\u\e\z\0\k\3\z\w\x\j\z\e\n\r\4\p\o\e\l\j\g\n\l\4\8\2\a\0\y\w\o\7\7\x\x\r\h\7\m\7\3\o\0\q\f\0\z\1\b\z\i\i\e\y\d\f\g\b\n\f\a\1\l\8\w\3\p\x\8\y\c\z\p\0\s\4\f\s\j\2\t\r\5\t\i\m\d\i\v\x\7\5\5\5\i\2\j\2\9\6\5\2\q\8\5\e\p\e\e\f\p\a\q\v\s\2\z\y\m\6\3\t\l\8\k\3\g\5\1\c\k\l\2\a\6\e\1\n\u\n\t\z\y\5\2\2\0\q\e\t\p\2\l\y\w\x\r\9\v\5\k\w\f\d\7\f\2\r\f\a\r\c\0\e\a\a\u\f\c\r\9\a\h\9\1\z\j\x\s\3\a\l\c\4\4\4\t\7\a\a\t\l\5\e\2\h\9\o\4\m\l\e\c\z\n\b\8\4\h\j\z\5\o\f\s\6\r\k\d\9\a\a\x\c\r\0\y\t\c\d\n\i\i\c\h\h\f\i\x\u\m\d\x\t\e\9\3\8\u\2\m\n\s\t\r\8\n\9\b\b\m\2\g\f\m\1\7\v\0\p\c\d\g\9\x\f\u\k\s\6\v\2\g\t\4\v\o\9\e\3\b\4\a\y\w\i\a\e\w\j\b\y\3\d\y\a\e\a\1\m\a\7\y\z\a\g\v\e\q\z\q\n\i\9\a\7\f\j\l\z\x\p\1\b\7\e\y\t\7\y\c\6\k\s\8\b\i\s\1\y\l\r\p\n\q\7\o\4\z\j\a\k\x\r\9\4\d\8\8\n\l\e\f\1\8\t\4\5\u\t\3\r\8\r\r\c\t\7\z\w\j\7\z\b\x\t\r\f\z\5\a\m\i\7\v\s\7\b\u\h\g\9\h\v\o\h\1\5\4\b\5\b\5\o\5\x\a\o\p\z\5\w\z\a\o\6\0\6\v\1\0\p\r\6\c\i\x\w\w\e\f\x\x\t\g\n\y\n\n\o\k\f\m\c\u\6\l\h\m\r\5\f\z\6\v\6\t\x\l\g\x\8\j\3\t\r\a\e\l\5\9\c\0\1\i\o\4\y\0\2\o\l\g\3\9\x\8\h\6\f\v\o\n\r\k\j\5\r\4\9\1\i\h\1\x\3\a\9\f\h\p\9\j\v\a\j\6\4\i\0\w\s\k\1\u\t\m\a\e\b\u\4\t\x\3\s\q\o\7\s\0\y\1\r\d\g\x\w\w\4\u\k\c\y\4\m\c\k\z\1\a\0\q\e\i\y\x\c\s\l\5\l\u\f\q\a\d\i\v\k\a\g\9\c\n\k\j\z\g\i\8\a\i\1\v\z\b\o\p\p\w\9\h\h\w\w\n\k\o\b\k\t\2\a\e\9\v\7\n\b\d\p\l\w\e\1\d\r\n\3\h\l\t\q\7\x\z\8\x\v\y\5\q\9\g\q\n\j\6\j\f\2\m\k\1\0\r\b\f\k\8\q\j\1\b\0\1\g\r\h\p\9\h\3\e\z\z\z\h\8\z\5\g\9\5\z\7\1\8\2\6\z\2\d\c\s\7\a\z\g\z\9\c\w\f\j\t\5\3\i\2\p\z\g\f\v\j\s\v\x\2\8\x\6\l\v\g\7\v\g\p\q\z\y\v\y\r\1\y\k\t\f\i\9\h\8\2\d\3\o\l\k\4\t\q\5\l\r\2\k\t\x\k\m\g\4\1\6 ]] 00:07:17.267 01:48:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:17.267 01:48:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ vjqegzbwnyedmf1deb3kaz7tl7s11ndxtecju6kp7rqummsz4ailgwodvh4u4m727htqgj27a9sm00ikdpjyqwrwac6wk67je4fmmrr9amq8esd898b3cdz9bxsycmak88j45lgvwtqtcmbhvx8j54j3gx6jnomjfmynlg4392815ewf38clagobqhobtg3v22ti3y8u5ukatxo3r6fywj9htvim6elkyxuez0k3zwxjzenr4poeljgnl482a0ywo77xxrh7m73o0qf0z1bziieydfgbnfa1l8w3px8yczp0s4fsj2tr5timdivx7555i2j29652q85epeefpaqvs2zym63tl8k3g51ckl2a6e1nuntzy5220qetp2lywxr9v5kwfd7f2rfarc0eaaufcr9ah91zjxs3alc444t7aatl5e2h9o4mlecznb84hjz5ofs6rkd9aaxcr0ytcdniichhfixumdxte938u2mnstr8n9bbm2gfm17v0pcdg9xfuks6v2gt4vo9e3b4aywiaewjby3dyaea1ma7yzagveqzqni9a7fjlzxp1b7eyt7yc6ks8bis1ylrpnq7o4zjakxr94d88nlef18t45ut3r8rrct7zwj7zbxtrfz5ami7vs7buhg9hvoh154b5b5o5xaopz5wzao606v10pr6cixwwefxxtgnynnokfmcu6lhmr5fz6v6txlgx8j3trael59c01io4y02olg39x8h6fvonrkj5r491ih1x3a9fhp9jvaj64i0wsk1utmaebu4tx3sqo7s0y1rdgxww4ukcy4mckz1a0qeiyxcsl5lufqadivkag9cnkjzgi8ai1vzboppw9hhwwnkobkt2ae9v7nbdplwe1drn3hltq7xz8xvy5q9gqnj6jf2mk10rbfk8qj1b01grhp9h3ezzzh8z5g95z71826z2dcs7azgz9cwfjt53i2pzgfvjsvx28x6lvg7vgpqzyvyr1yktfi9h82d3olk4tq5lr2ktxkmg416 == 
\v\j\q\e\g\z\b\w\n\y\e\d\m\f\1\d\e\b\3\k\a\z\7\t\l\7\s\1\1\n\d\x\t\e\c\j\u\6\k\p\7\r\q\u\m\m\s\z\4\a\i\l\g\w\o\d\v\h\4\u\4\m\7\2\7\h\t\q\g\j\2\7\a\9\s\m\0\0\i\k\d\p\j\y\q\w\r\w\a\c\6\w\k\6\7\j\e\4\f\m\m\r\r\9\a\m\q\8\e\s\d\8\9\8\b\3\c\d\z\9\b\x\s\y\c\m\a\k\8\8\j\4\5\l\g\v\w\t\q\t\c\m\b\h\v\x\8\j\5\4\j\3\g\x\6\j\n\o\m\j\f\m\y\n\l\g\4\3\9\2\8\1\5\e\w\f\3\8\c\l\a\g\o\b\q\h\o\b\t\g\3\v\2\2\t\i\3\y\8\u\5\u\k\a\t\x\o\3\r\6\f\y\w\j\9\h\t\v\i\m\6\e\l\k\y\x\u\e\z\0\k\3\z\w\x\j\z\e\n\r\4\p\o\e\l\j\g\n\l\4\8\2\a\0\y\w\o\7\7\x\x\r\h\7\m\7\3\o\0\q\f\0\z\1\b\z\i\i\e\y\d\f\g\b\n\f\a\1\l\8\w\3\p\x\8\y\c\z\p\0\s\4\f\s\j\2\t\r\5\t\i\m\d\i\v\x\7\5\5\5\i\2\j\2\9\6\5\2\q\8\5\e\p\e\e\f\p\a\q\v\s\2\z\y\m\6\3\t\l\8\k\3\g\5\1\c\k\l\2\a\6\e\1\n\u\n\t\z\y\5\2\2\0\q\e\t\p\2\l\y\w\x\r\9\v\5\k\w\f\d\7\f\2\r\f\a\r\c\0\e\a\a\u\f\c\r\9\a\h\9\1\z\j\x\s\3\a\l\c\4\4\4\t\7\a\a\t\l\5\e\2\h\9\o\4\m\l\e\c\z\n\b\8\4\h\j\z\5\o\f\s\6\r\k\d\9\a\a\x\c\r\0\y\t\c\d\n\i\i\c\h\h\f\i\x\u\m\d\x\t\e\9\3\8\u\2\m\n\s\t\r\8\n\9\b\b\m\2\g\f\m\1\7\v\0\p\c\d\g\9\x\f\u\k\s\6\v\2\g\t\4\v\o\9\e\3\b\4\a\y\w\i\a\e\w\j\b\y\3\d\y\a\e\a\1\m\a\7\y\z\a\g\v\e\q\z\q\n\i\9\a\7\f\j\l\z\x\p\1\b\7\e\y\t\7\y\c\6\k\s\8\b\i\s\1\y\l\r\p\n\q\7\o\4\z\j\a\k\x\r\9\4\d\8\8\n\l\e\f\1\8\t\4\5\u\t\3\r\8\r\r\c\t\7\z\w\j\7\z\b\x\t\r\f\z\5\a\m\i\7\v\s\7\b\u\h\g\9\h\v\o\h\1\5\4\b\5\b\5\o\5\x\a\o\p\z\5\w\z\a\o\6\0\6\v\1\0\p\r\6\c\i\x\w\w\e\f\x\x\t\g\n\y\n\n\o\k\f\m\c\u\6\l\h\m\r\5\f\z\6\v\6\t\x\l\g\x\8\j\3\t\r\a\e\l\5\9\c\0\1\i\o\4\y\0\2\o\l\g\3\9\x\8\h\6\f\v\o\n\r\k\j\5\r\4\9\1\i\h\1\x\3\a\9\f\h\p\9\j\v\a\j\6\4\i\0\w\s\k\1\u\t\m\a\e\b\u\4\t\x\3\s\q\o\7\s\0\y\1\r\d\g\x\w\w\4\u\k\c\y\4\m\c\k\z\1\a\0\q\e\i\y\x\c\s\l\5\l\u\f\q\a\d\i\v\k\a\g\9\c\n\k\j\z\g\i\8\a\i\1\v\z\b\o\p\p\w\9\h\h\w\w\n\k\o\b\k\t\2\a\e\9\v\7\n\b\d\p\l\w\e\1\d\r\n\3\h\l\t\q\7\x\z\8\x\v\y\5\q\9\g\q\n\j\6\j\f\2\m\k\1\0\r\b\f\k\8\q\j\1\b\0\1\g\r\h\p\9\h\3\e\z\z\z\h\8\z\5\g\9\5\z\7\1\8\2\6\z\2\d\c\s\7\a\z\g\z\9\c\w\f\j\t\5\3\i\2\p\z\g\f\v\j\s\v\x\2\8\x\6\l\v\g\7\v\g\p\q\z\y\v\y\r\1\y\k\t\f\i\9\h\8\2\d\3\o\l\k\4\t\q\5\l\r\2\k\t\x\k\m\g\4\1\6 ]] 00:07:17.267 01:48:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:17.836 01:48:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:17.836 01:48:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:17.836 01:48:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:17.836 01:48:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:17.836 [2024-11-19 01:48:28.260186] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
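The walls of escaped characters above are just the 1024-byte magic string being matched twice, once per read of verify_magic, after traveling through the uring0 bdev. Stripped of xtrace, the round trip looks roughly like this (a sketch: gen_bytes and gen_conf are the suite's own helpers, visible in the trace; the redirect feeding read is my assumption, and SPDK_DD stands for the binary path used throughout this log):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
magic=$(gen_bytes 1024)                     # 1024 random word characters
echo "$magic" > magic.dump0                 # marker goes at the head of the source file
# Pad the file out with zeros (536869887 bytes, as the trace above shows).
"$SPDK_DD" --if=/dev/zero --of=magic.dump0 --oflag=append --bs=536869887 --count=1
"$SPDK_DD" --if=magic.dump0 --ob=uring0 --json <(gen_conf)   # file -> zram-backed bdev
"$SPDK_DD" --ib=uring0 --of=magic.dump1 --json <(gen_conf)   # bdev -> second file
read -rn1024 verify_magic < magic.dump1     # first 1024 bytes should be the marker
[[ $verify_magic == "$magic" ]] && diff -q magic.dump0 magic.dump1

The diff -q at the end is the same full-file comparison the test issues right after the second marker check.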
00:07:17.836 [2024-11-19 01:48:28.260294] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73217 ] 00:07:17.836 { 00:07:17.836 "subsystems": [ 00:07:17.836 { 00:07:17.836 "subsystem": "bdev", 00:07:17.836 "config": [ 00:07:17.836 { 00:07:17.836 "params": { 00:07:17.836 "block_size": 512, 00:07:17.836 "num_blocks": 1048576, 00:07:17.836 "name": "malloc0" 00:07:17.836 }, 00:07:17.836 "method": "bdev_malloc_create" 00:07:17.836 }, 00:07:17.836 { 00:07:17.836 "params": { 00:07:17.836 "filename": "/dev/zram1", 00:07:17.836 "name": "uring0" 00:07:17.836 }, 00:07:17.836 "method": "bdev_uring_create" 00:07:17.836 }, 00:07:17.836 { 00:07:17.836 "method": "bdev_wait_for_examine" 00:07:17.836 } 00:07:17.836 ] 00:07:17.836 } 00:07:17.836 ] 00:07:17.836 } 00:07:17.836 [2024-11-19 01:48:28.408496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.836 [2024-11-19 01:48:28.428486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.096 [2024-11-19 01:48:28.457999] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.034  [2024-11-19T01:48:30.586Z] Copying: 168/512 [MB] (168 MBps) [2024-11-19T01:48:31.965Z] Copying: 339/512 [MB] (171 MBps) [2024-11-19T01:48:31.965Z] Copying: 507/512 [MB] (167 MBps) [2024-11-19T01:48:31.965Z] Copying: 512/512 [MB] (average 168 MBps) 00:07:21.350 00:07:21.350 01:48:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:21.350 01:48:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:21.350 01:48:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:21.350 01:48:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:21.350 01:48:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:21.350 01:48:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:21.350 01:48:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:21.350 01:48:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:21.350 [2024-11-19 01:48:31.870234] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:07:21.350 [2024-11-19 01:48:31.870330] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73265 ] 00:07:21.350 { 00:07:21.350 "subsystems": [ 00:07:21.350 { 00:07:21.351 "subsystem": "bdev", 00:07:21.351 "config": [ 00:07:21.351 { 00:07:21.351 "params": { 00:07:21.351 "block_size": 512, 00:07:21.351 "num_blocks": 1048576, 00:07:21.351 "name": "malloc0" 00:07:21.351 }, 00:07:21.351 "method": "bdev_malloc_create" 00:07:21.351 }, 00:07:21.351 { 00:07:21.351 "params": { 00:07:21.351 "filename": "/dev/zram1", 00:07:21.351 "name": "uring0" 00:07:21.351 }, 00:07:21.351 "method": "bdev_uring_create" 00:07:21.351 }, 00:07:21.351 { 00:07:21.351 "params": { 00:07:21.351 "name": "uring0" 00:07:21.351 }, 00:07:21.351 "method": "bdev_uring_delete" 00:07:21.351 }, 00:07:21.351 { 00:07:21.351 "method": "bdev_wait_for_examine" 00:07:21.351 } 00:07:21.351 ] 00:07:21.351 } 00:07:21.351 ] 00:07:21.351 } 00:07:21.609 [2024-11-19 01:48:32.014377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.609 [2024-11-19 01:48:32.034143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.609 [2024-11-19 01:48:32.062531] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.609  [2024-11-19T01:48:32.483Z] Copying: 0/0 [B] (average 0 Bps) 00:07:21.868 00:07:21.868 01:48:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:21.868 01:48:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:21.868 01:48:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:07:21.868 01:48:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:21.868 01:48:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:21.868 01:48:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.868 01:48:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:21.868 01:48:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:21.868 01:48:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.868 01:48:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.868 01:48:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.868 01:48:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.868 01:48:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.868 01:48:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:21.869 01:48:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:21.869 01:48:32 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:21.869 [2024-11-19 01:48:32.460988] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:07:21.869 [2024-11-19 01:48:32.461635] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73290 ] 00:07:21.869 { 00:07:21.869 "subsystems": [ 00:07:21.869 { 00:07:21.869 "subsystem": "bdev", 00:07:21.869 "config": [ 00:07:21.869 { 00:07:21.869 "params": { 00:07:21.869 "block_size": 512, 00:07:21.869 "num_blocks": 1048576, 00:07:21.869 "name": "malloc0" 00:07:21.869 }, 00:07:21.869 "method": "bdev_malloc_create" 00:07:21.869 }, 00:07:21.869 { 00:07:21.869 "params": { 00:07:21.869 "filename": "/dev/zram1", 00:07:21.869 "name": "uring0" 00:07:21.869 }, 00:07:21.869 "method": "bdev_uring_create" 00:07:21.869 }, 00:07:21.869 { 00:07:21.869 "params": { 00:07:21.869 "name": "uring0" 00:07:21.869 }, 00:07:21.869 "method": "bdev_uring_delete" 00:07:21.869 }, 00:07:21.869 { 00:07:21.869 "method": "bdev_wait_for_examine" 00:07:21.869 } 00:07:21.869 ] 00:07:21.869 } 00:07:21.869 ] 00:07:21.869 } 00:07:22.128 [2024-11-19 01:48:32.608824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.128 [2024-11-19 01:48:32.628958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.128 [2024-11-19 01:48:32.660220] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.387 [2024-11-19 01:48:32.784383] bdev.c:8259:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:22.387 [2024-11-19 01:48:32.784451] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:22.387 [2024-11-19 01:48:32.784477] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:22.387 [2024-11-19 01:48:32.784486] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:22.387 [2024-11-19 01:48:32.968013] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:22.646 01:48:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:07:22.646 01:48:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:22.646 01:48:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:07:22.646 01:48:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:07:22.646 01:48:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:07:22.646 01:48:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:22.646 01:48:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:22.646 01:48:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:07:22.646 01:48:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:07:22.646 01:48:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:07:22.646 01:48:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:07:22.646 01:48:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:22.646 00:07:22.646 real 0m12.191s 00:07:22.646 user 0m8.445s 00:07:22.646 sys 0m10.418s 00:07:22.646 ************************************ 00:07:22.646 END TEST dd_uring_copy 00:07:22.646 ************************************ 00:07:22.646 01:48:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.646 01:48:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:22.646 00:07:22.646 real 0m12.439s 00:07:22.646 user 0m8.595s 00:07:22.646 sys 0m10.510s 00:07:22.646 01:48:33 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.646 ************************************ 00:07:22.646 END TEST spdk_dd_uring 00:07:22.646 ************************************ 00:07:22.646 01:48:33 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:22.646 01:48:33 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:22.646 01:48:33 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:22.646 01:48:33 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.646 01:48:33 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:22.646 ************************************ 00:07:22.646 START TEST spdk_dd_sparse 00:07:22.646 ************************************ 00:07:22.646 01:48:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:22.646 * Looking for test storage... 00:07:22.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:22.646 01:48:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:22.646 01:48:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version 00:07:22.646 01:48:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:22.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.906 --rc genhtml_branch_coverage=1 00:07:22.906 --rc genhtml_function_coverage=1 00:07:22.906 --rc genhtml_legend=1 00:07:22.906 --rc geninfo_all_blocks=1 00:07:22.906 --rc geninfo_unexecuted_blocks=1 00:07:22.906 00:07:22.906 ' 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:22.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.906 --rc genhtml_branch_coverage=1 00:07:22.906 --rc genhtml_function_coverage=1 00:07:22.906 --rc genhtml_legend=1 00:07:22.906 --rc geninfo_all_blocks=1 00:07:22.906 --rc geninfo_unexecuted_blocks=1 00:07:22.906 00:07:22.906 ' 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:22.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.906 --rc genhtml_branch_coverage=1 00:07:22.906 --rc genhtml_function_coverage=1 00:07:22.906 --rc genhtml_legend=1 00:07:22.906 --rc geninfo_all_blocks=1 00:07:22.906 --rc geninfo_unexecuted_blocks=1 00:07:22.906 00:07:22.906 ' 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:22.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.906 --rc genhtml_branch_coverage=1 00:07:22.906 --rc genhtml_function_coverage=1 00:07:22.906 --rc genhtml_legend=1 00:07:22.906 --rc geninfo_all_blocks=1 00:07:22.906 --rc geninfo_unexecuted_blocks=1 00:07:22.906 00:07:22.906 ' 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.906 01:48:33 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:22.906 1+0 records in 00:07:22.906 1+0 records out 00:07:22.906 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00569801 s, 736 MB/s 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:22.906 1+0 records in 00:07:22.906 1+0 records out 00:07:22.906 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00648798 s, 646 MB/s 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:22.906 1+0 records in 00:07:22.906 1+0 records out 00:07:22.906 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00312964 s, 1.3 GB/s 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:22.906 ************************************ 00:07:22.906 START TEST dd_sparse_file_to_file 00:07:22.906 ************************************ 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:07:22.906 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:22.907 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:22.907 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:22.907 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:22.907 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:22.907 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:22.907 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:22.907 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:22.907 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:22.907 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:22.907 [2024-11-19 01:48:33.401243] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
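The fixture prepared above is worth unpacking: truncate(1) creates a 100 MiB backing file (104857600 bytes) for the AIO bdev, and the three dd invocations each write one 4 MiB block of zeroes at byte offsets 0, 16 MiB (seek=4 in bs=4M units) and 32 MiB (seek=8), leaving file_zero1 with a 36 MiB apparent size of which only 12 MiB is actually allocated. A minimal standalone sketch of the same preparation, assuming GNU coreutils in a scratch directory:

    truncate dd_sparse_aio_disk --size 104857600
    dd if=/dev/zero of=file_zero1 bs=4M count=1
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8

The stat assertions later in the test follow directly from this arithmetic: %s must report 37748736 bytes (the 36 MiB apparent size) and %b must report 24576 512-byte blocks (24576 * 512 = 12582912 bytes, the 12 MiB actually written), so the copy invoked above with --bs=12582912 --sparse only passes if spdk_dd skips the holes rather than filling them with zeroes. The JSON interleaved with the startup messages that follow is the config fed to spdk_dd via --json /dev/fd/62: it creates the dd_aio AIO bdev on the backing file (block_size 4096) and the dd_lvstore lvstore used by the follow-on tests.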
00:07:22.907 [2024-11-19 01:48:33.401905] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73383 ] 00:07:22.907 { 00:07:22.907 "subsystems": [ 00:07:22.907 { 00:07:22.907 "subsystem": "bdev", 00:07:22.907 "config": [ 00:07:22.907 { 00:07:22.907 "params": { 00:07:22.907 "block_size": 4096, 00:07:22.907 "filename": "dd_sparse_aio_disk", 00:07:22.907 "name": "dd_aio" 00:07:22.907 }, 00:07:22.907 "method": "bdev_aio_create" 00:07:22.907 }, 00:07:22.907 { 00:07:22.907 "params": { 00:07:22.907 "lvs_name": "dd_lvstore", 00:07:22.907 "bdev_name": "dd_aio" 00:07:22.907 }, 00:07:22.907 "method": "bdev_lvol_create_lvstore" 00:07:22.907 }, 00:07:22.907 { 00:07:22.907 "method": "bdev_wait_for_examine" 00:07:22.907 } 00:07:22.907 ] 00:07:22.907 } 00:07:22.907 ] 00:07:22.907 } 00:07:23.166 [2024-11-19 01:48:33.549798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.166 [2024-11-19 01:48:33.576026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.166 [2024-11-19 01:48:33.613105] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:23.166  [2024-11-19T01:48:34.040Z] Copying: 12/36 [MB] (average 923 MBps) 00:07:23.425 00:07:23.425 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:23.425 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:23.425 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:23.425 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:23.425 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:23.425 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:23.425 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:23.425 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:23.425 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:23.425 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:23.425 00:07:23.425 real 0m0.531s 00:07:23.425 user 0m0.311s 00:07:23.425 sys 0m0.268s 00:07:23.425 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.425 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:23.425 ************************************ 00:07:23.425 END TEST dd_sparse_file_to_file 00:07:23.425 ************************************ 00:07:23.425 01:48:33 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:23.425 01:48:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.425 01:48:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.425 01:48:33 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:23.425 ************************************ 00:07:23.425 START TEST dd_sparse_file_to_bdev 
00:07:23.425 ************************************ 00:07:23.425 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:07:23.425 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:23.425 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:23.425 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:23.425 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:23.425 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:23.425 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:23.425 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:23.425 01:48:33 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:23.425 [2024-11-19 01:48:33.978755] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:07:23.425 [2024-11-19 01:48:33.978867] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73431 ] 00:07:23.425 { 00:07:23.425 "subsystems": [ 00:07:23.425 { 00:07:23.425 "subsystem": "bdev", 00:07:23.425 "config": [ 00:07:23.425 { 00:07:23.425 "params": { 00:07:23.425 "block_size": 4096, 00:07:23.425 "filename": "dd_sparse_aio_disk", 00:07:23.425 "name": "dd_aio" 00:07:23.425 }, 00:07:23.425 "method": "bdev_aio_create" 00:07:23.425 }, 00:07:23.425 { 00:07:23.425 "params": { 00:07:23.425 "lvs_name": "dd_lvstore", 00:07:23.425 "lvol_name": "dd_lvol", 00:07:23.425 "size_in_mib": 36, 00:07:23.425 "thin_provision": true 00:07:23.425 }, 00:07:23.425 "method": "bdev_lvol_create" 00:07:23.425 }, 00:07:23.425 { 00:07:23.425 "method": "bdev_wait_for_examine" 00:07:23.425 } 00:07:23.425 ] 00:07:23.425 } 00:07:23.425 ] 00:07:23.425 } 00:07:23.684 [2024-11-19 01:48:34.130479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.684 [2024-11-19 01:48:34.155473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.684 [2024-11-19 01:48:34.189588] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:23.684  [2024-11-19T01:48:34.558Z] Copying: 12/36 [MB] (average 521 MBps) 00:07:23.943 00:07:23.943 00:07:23.943 real 0m0.489s 00:07:23.943 user 0m0.308s 00:07:23.943 sys 0m0.240s 00:07:23.943 01:48:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.943 01:48:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:23.943 ************************************ 00:07:23.943 END TEST dd_sparse_file_to_bdev 00:07:23.943 ************************************ 00:07:23.943 01:48:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:07:23.943 01:48:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.943 01:48:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.943 01:48:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:23.943 ************************************ 00:07:23.943 START TEST dd_sparse_bdev_to_file 00:07:23.943 ************************************ 00:07:23.943 01:48:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:07:23.943 01:48:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:23.943 01:48:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:23.943 01:48:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:23.943 01:48:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:23.943 01:48:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:23.943 01:48:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:07:23.943 01:48:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:23.943 01:48:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:23.943 [2024-11-19 01:48:34.525980] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:07:23.943 [2024-11-19 01:48:34.526119] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73458 ] 00:07:23.943 { 00:07:23.943 "subsystems": [ 00:07:23.943 { 00:07:23.943 "subsystem": "bdev", 00:07:23.943 "config": [ 00:07:23.943 { 00:07:23.943 "params": { 00:07:23.943 "block_size": 4096, 00:07:23.943 "filename": "dd_sparse_aio_disk", 00:07:23.943 "name": "dd_aio" 00:07:23.943 }, 00:07:23.943 "method": "bdev_aio_create" 00:07:23.943 }, 00:07:23.943 { 00:07:23.943 "method": "bdev_wait_for_examine" 00:07:23.943 } 00:07:23.943 ] 00:07:23.943 } 00:07:23.943 ] 00:07:23.943 } 00:07:24.203 [2024-11-19 01:48:34.680965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.203 [2024-11-19 01:48:34.707650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.203 [2024-11-19 01:48:34.742800] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.203  [2024-11-19T01:48:35.077Z] Copying: 12/36 [MB] (average 923 MBps) 00:07:24.462 00:07:24.462 01:48:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:24.462 01:48:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:24.462 01:48:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:24.462 01:48:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:24.462 01:48:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:24.462 01:48:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:24.462 01:48:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:24.462 01:48:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:24.462 01:48:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:24.462 01:48:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:24.462 00:07:24.462 real 0m0.510s 00:07:24.462 user 0m0.316s 00:07:24.462 sys 0m0.254s 00:07:24.462 01:48:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.462 01:48:34 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:24.462 ************************************ 00:07:24.462 END TEST dd_sparse_bdev_to_file 00:07:24.462 ************************************ 00:07:24.462 01:48:35 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:24.462 01:48:35 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:24.462 01:48:35 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:24.462 01:48:35 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:07:24.463 01:48:35 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:24.463 00:07:24.463 real 0m1.901s 00:07:24.463 user 0m1.095s 00:07:24.463 sys 0m0.964s 00:07:24.463 01:48:35 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.463 01:48:35 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:24.463 ************************************ 00:07:24.463 END TEST spdk_dd_sparse 00:07:24.463 ************************************ 00:07:24.722 01:48:35 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:24.722 01:48:35 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:24.722 01:48:35 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.722 01:48:35 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:24.722 ************************************ 00:07:24.722 START TEST spdk_dd_negative 00:07:24.722 ************************************ 00:07:24.722 01:48:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:24.722 * Looking for test storage... 
00:07:24.722 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:24.722 01:48:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:24.722 01:48:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:24.722 01:48:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version 00:07:24.722 01:48:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:24.722 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:24.722 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:24.722 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:24.722 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:07:24.722 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:07:24.722 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:07:24.722 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:07:24.722 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:07:24.722 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:07:24.722 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:07:24.722 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:24.722 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:07:24.722 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:07:24.722 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:24.722 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:24.722 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:07:24.722 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:07:24.722 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:24.722 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:07:24.722 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:07:24.722 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:07:24.722 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:07:24.722 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:24.722 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:24.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.723 --rc genhtml_branch_coverage=1 00:07:24.723 --rc genhtml_function_coverage=1 00:07:24.723 --rc genhtml_legend=1 00:07:24.723 --rc geninfo_all_blocks=1 00:07:24.723 --rc geninfo_unexecuted_blocks=1 00:07:24.723 00:07:24.723 ' 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:24.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.723 --rc genhtml_branch_coverage=1 00:07:24.723 --rc genhtml_function_coverage=1 00:07:24.723 --rc genhtml_legend=1 00:07:24.723 --rc geninfo_all_blocks=1 00:07:24.723 --rc geninfo_unexecuted_blocks=1 00:07:24.723 00:07:24.723 ' 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:24.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.723 --rc genhtml_branch_coverage=1 00:07:24.723 --rc genhtml_function_coverage=1 00:07:24.723 --rc genhtml_legend=1 00:07:24.723 --rc geninfo_all_blocks=1 00:07:24.723 --rc geninfo_unexecuted_blocks=1 00:07:24.723 00:07:24.723 ' 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:24.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.723 --rc genhtml_branch_coverage=1 00:07:24.723 --rc genhtml_function_coverage=1 00:07:24.723 --rc genhtml_legend=1 00:07:24.723 --rc geninfo_all_blocks=1 00:07:24.723 --rc geninfo_unexecuted_blocks=1 00:07:24.723 00:07:24.723 ' 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:24.723 ************************************ 00:07:24.723 START TEST 
dd_invalid_arguments 00:07:24.723 ************************************ 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:24.723 01:48:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:24.983 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:24.983 00:07:24.983 CPU options: 00:07:24.983 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:24.983 (like [0,1,10]) 00:07:24.983 --lcores lcore to CPU mapping list. The list is in the format: 00:07:24.983 [<,lcores[@CPUs]>...] 00:07:24.983 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:24.983 Within the group, '-' is used for range separator, 00:07:24.983 ',' is used for single number separator. 00:07:24.983 '( )' can be omitted for single element group, 00:07:24.983 '@' can be omitted if cpus and lcores have the same value 00:07:24.983 --disable-cpumask-locks Disable CPU core lock files. 00:07:24.983 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:24.983 pollers in the app support interrupt mode) 00:07:24.983 -p, --main-core main (primary) core for DPDK 00:07:24.983 00:07:24.983 Configuration options: 00:07:24.983 -c, --config, --json JSON config file 00:07:24.983 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:24.983 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:07:24.983 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:24.983 --rpcs-allowed comma-separated list of permitted RPCS 00:07:24.983 --json-ignore-init-errors don't exit on invalid config entry 00:07:24.983 00:07:24.983 Memory options: 00:07:24.983 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:24.983 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:24.983 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:24.983 -R, --huge-unlink unlink huge files after initialization 00:07:24.983 -n, --mem-channels number of memory channels used for DPDK 00:07:24.983 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:24.983 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:24.983 --no-huge run without using hugepages 00:07:24.983 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:07:24.983 -i, --shm-id shared memory ID (optional) 00:07:24.983 -g, --single-file-segments force creating just one hugetlbfs file 00:07:24.983 00:07:24.983 PCI options: 00:07:24.983 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:24.983 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:24.983 -u, --no-pci disable PCI access 00:07:24.983 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:24.983 00:07:24.983 Log options: 00:07:24.983 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:24.983 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:24.983 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:24.983 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:24.983 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:07:24.983 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:07:24.983 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:07:24.983 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:07:24.983 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:07:24.983 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:07:24.983 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:07:24.983 --silence-noticelog disable notice level logging to stderr 00:07:24.983 00:07:24.983 Trace options: 00:07:24.983 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:24.983 setting 0 to disable trace (default 32768) 00:07:24.983 Tracepoints vary in size and can use more than one trace entry. 00:07:24.983 -e, --tpoint-group [:] 00:07:24.983 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:24.983 [2024-11-19 01:48:35.344008] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:24.983 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:24.983 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:07:24.983 bdev_raid, scheduler, all). 00:07:24.983 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:24.983 a tracepoint group. First tpoint inside a group can be enabled by 00:07:24.983 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:24.983 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:07:24.983 in /include/spdk_internal/trace_defs.h 00:07:24.983 00:07:24.983 Other options: 00:07:24.983 -h, --help show this usage 00:07:24.983 -v, --version print SPDK version 00:07:24.983 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:24.983 --env-context Opaque context for use of the env implementation 00:07:24.983 00:07:24.983 Application specific: 00:07:24.983 [--------- DD Options ---------] 00:07:24.983 --if Input file. Must specify either --if or --ib. 00:07:24.983 --ib Input bdev. Must specify either --if or --ib 00:07:24.983 --of Output file. Must specify either --of or --ob. 00:07:24.983 --ob Output bdev. Must specify either --of or --ob. 00:07:24.983 --iflag Input file flags. 00:07:24.983 --oflag Output file flags. 00:07:24.983 --bs I/O unit size (default: 4096) 00:07:24.983 --qd Queue depth (default: 2) 00:07:24.983 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:24.983 --skip Skip this many I/O units at start of input. (default: 0) 00:07:24.983 --seek Skip this many I/O units at start of output. (default: 0) 00:07:24.983 --aio Force usage of AIO. (by default io_uring is used if available) 00:07:24.983 --sparse Enable hole skipping in input target 00:07:24.983 Available iflag and oflag values: 00:07:24.983 append - append mode 00:07:24.983 direct - use direct I/O for data 00:07:24.983 directory - fail unless a directory 00:07:24.983 dsync - use synchronized I/O for data 00:07:24.983 noatime - do not update access time 00:07:24.983 noctty - do not assign controlling terminal from file 00:07:24.983 nofollow - do not follow symlinks 00:07:24.983 nonblock - use non-blocking I/O 00:07:24.983 sync - use synchronized I/O for data and metadata 00:07:24.983 01:48:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:07:24.983 01:48:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:24.983 01:48:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:24.983 01:48:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:24.983 00:07:24.983 real 0m0.075s 00:07:24.983 user 0m0.044s 00:07:24.983 sys 0m0.029s 00:07:24.983 01:48:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.983 01:48:35 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:24.983 ************************************ 00:07:24.983 END TEST dd_invalid_arguments 00:07:24.983 ************************************ 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:24.984 ************************************ 00:07:24.984 START TEST dd_double_input 00:07:24.984 ************************************ 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:24.984 [2024-11-19 01:48:35.481148] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
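Each of these negative cases follows the same pattern: the NOT wrapper from autotest_common.sh runs spdk_dd through valid_exec_arg, captures the exit status in es (22, i.e. EINVAL, for these argument errors, as the next trace line shows), and only counts the test as passed when that status is non-zero. A minimal sketch of reproducing the rejection just logged by hand, assuming a built spdk_dd and the dd.dump0 file that negative_dd.sh touches at startup:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=
    # stderr: You may specify either --if or --ib, but not both.
    echo $?    # 22; the harness only requires a non-zero status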
00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:24.984 00:07:24.984 real 0m0.081s 00:07:24.984 user 0m0.048s 00:07:24.984 sys 0m0.031s 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:24.984 ************************************ 00:07:24.984 END TEST dd_double_input 00:07:24.984 ************************************ 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:24.984 ************************************ 00:07:24.984 START TEST dd_double_output 00:07:24.984 ************************************ 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:24.984 01:48:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:25.243 [2024-11-19 01:48:35.611037] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:25.244 00:07:25.244 real 0m0.067s 00:07:25.244 user 0m0.036s 00:07:25.244 sys 0m0.030s 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:25.244 ************************************ 00:07:25.244 END TEST dd_double_output 00:07:25.244 ************************************ 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:25.244 ************************************ 00:07:25.244 START TEST dd_no_input 00:07:25.244 ************************************ 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:25.244 [2024-11-19 01:48:35.736220] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:25.244 00:07:25.244 real 0m0.075s 00:07:25.244 user 0m0.043s 00:07:25.244 sys 0m0.030s 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.244 ************************************ 00:07:25.244 END TEST dd_no_input 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:25.244 ************************************ 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:25.244 ************************************ 00:07:25.244 START TEST dd_no_output 00:07:25.244 ************************************ 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:25.244 01:48:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:25.503 [2024-11-19 01:48:35.868875] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:25.503 01:48:35 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:07:25.503 01:48:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:25.503 01:48:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:25.503 01:48:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:25.503 00:07:25.503 real 0m0.079s 00:07:25.503 user 0m0.055s 00:07:25.503 sys 0m0.023s 00:07:25.503 01:48:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.503 01:48:35 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:25.503 ************************************ 00:07:25.503 END TEST dd_no_output 00:07:25.503 ************************************ 00:07:25.503 01:48:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:25.503 01:48:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.503 01:48:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.503 01:48:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:25.503 ************************************ 00:07:25.503 START TEST dd_wrong_blocksize 00:07:25.503 ************************************ 00:07:25.503 01:48:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:07:25.503 01:48:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:25.503 01:48:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:07:25.503 01:48:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:25.503 01:48:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.503 01:48:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.503 01:48:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.503 01:48:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.503 01:48:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.503 01:48:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.503 01:48:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.503 01:48:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:25.503 01:48:35 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:25.503 [2024-11-19 01:48:35.991271] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:25.503 01:48:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:07:25.503 01:48:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:25.503 01:48:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:25.503 01:48:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:25.503 00:07:25.503 real 0m0.061s 00:07:25.503 user 0m0.037s 00:07:25.503 sys 0m0.022s 00:07:25.503 01:48:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.503 01:48:36 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:25.503 ************************************ 00:07:25.503 END TEST dd_wrong_blocksize 00:07:25.503 ************************************ 00:07:25.503 01:48:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:25.503 01:48:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.503 01:48:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.503 01:48:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:25.503 ************************************ 00:07:25.503 START TEST dd_smaller_blocksize 00:07:25.503 ************************************ 00:07:25.503 01:48:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:07:25.503 01:48:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:25.503 01:48:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:07:25.503 01:48:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:25.503 01:48:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.503 01:48:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.503 01:48:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.503 01:48:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.503 01:48:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.503 01:48:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.503 01:48:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.503 
01:48:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:25.503 01:48:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:25.503 [2024-11-19 01:48:36.116645] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:07:25.503 [2024-11-19 01:48:36.116779] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73690 ] 00:07:25.762 [2024-11-19 01:48:36.262201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.762 [2024-11-19 01:48:36.281835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.762 [2024-11-19 01:48:36.308973] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:25.762 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:25.762 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:25.762 [2024-11-19 01:48:36.324610] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:25.762 [2024-11-19 01:48:36.324641] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:26.021 [2024-11-19 01:48:36.382836] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:26.021 00:07:26.021 real 0m0.373s 00:07:26.021 user 0m0.174s 00:07:26.021 sys 0m0.095s 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:26.021 ************************************ 00:07:26.021 END TEST dd_smaller_blocksize 00:07:26.021 ************************************ 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:26.021 ************************************ 00:07:26.021 START TEST dd_invalid_count 00:07:26.021 ************************************ 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
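A note on the exit-status bookkeeping visible above: es=244 is folded to 116 and then normalized to 1 before the final (( !es == 0 )) check. The harness wraps every expected-failure invocation in a NOT-style helper; the following is a minimal sketch of that pattern, reconstructed from this trace rather than quoted from common/autotest_common.sh (the real helper has more branches):

    NOT() {
        local es=0
        "$@" || es=$?
        if (( es > 128 )); then      # signal-range codes: 244 -> 116, 234 -> 106
            es=$(( es - 128 ))
            case "$es" in
                100|106|116) es=1 ;; # known-crash set assumed from the values in this log
            esac
        fi
        (( es != 0 ))                # equivalent to the trace's (( !es == 0 )):
    }                                # NOT succeeds only if the wrapped command failed

Under this reading, the smaller_blocksize case above passes because spdk_dd dies with "Cannot allocate memory - try smaller block size value" instead of completing the copy.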
00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:26.021 [2024-11-19 01:48:36.548528] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:26.021 00:07:26.021 real 0m0.077s 00:07:26.021 user 0m0.045s 00:07:26.021 sys 0m0.031s 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:07:26.021 ************************************ 00:07:26.021 END TEST dd_invalid_count 00:07:26.021 ************************************ 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:26.021 ************************************ 
00:07:26.021 START TEST dd_invalid_oflag 00:07:26.021 ************************************ 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:26.021 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:26.280 [2024-11-19 01:48:36.684169] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:26.280 00:07:26.280 real 0m0.078s 00:07:26.280 user 0m0.051s 00:07:26.280 sys 0m0.026s 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:26.280 ************************************ 00:07:26.280 END TEST dd_invalid_oflag 00:07:26.280 ************************************ 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:26.280 ************************************ 00:07:26.280 START TEST dd_invalid_iflag 00:07:26.280 
************************************ 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:26.280 [2024-11-19 01:48:36.821796] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:26.280 00:07:26.280 real 0m0.080s 00:07:26.280 user 0m0.049s 00:07:26.280 sys 0m0.029s 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:26.280 ************************************ 00:07:26.280 END TEST dd_invalid_iflag 00:07:26.280 ************************************ 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.280 01:48:36 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:26.540 ************************************ 00:07:26.540 START TEST dd_unknown_flag 00:07:26.540 ************************************ 00:07:26.540 
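The unknown_flag case that follows feeds spdk_dd an --oflag value its flag parser cannot map. A stand-alone reproduction, assuming a default SPDK build tree (paths are placeholders for the ones in the trace):

    # Expect "Unknown file flag: -1" and a nonzero exit (es=234 below):
    ./build/bin/spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --oflag=-1

Note in the trace that the flag is rejected twice and spdk_app_stop fires more than once; the harness tolerates the duplicate stop notices as long as the final exit status is nonzero.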
01:48:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:07:26.540 01:48:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:26.540 01:48:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:07:26.540 01:48:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:26.540 01:48:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.540 01:48:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.540 01:48:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.540 01:48:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.540 01:48:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.540 01:48:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.540 01:48:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.540 01:48:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:26.540 01:48:36 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:26.540 [2024-11-19 01:48:36.961627] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:07:26.540 [2024-11-19 01:48:36.961731] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73776 ] 00:07:26.540 [2024-11-19 01:48:37.114820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.540 [2024-11-19 01:48:37.140365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.801 [2024-11-19 01:48:37.174714] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.801 [2024-11-19 01:48:37.192557] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:26.801 [2024-11-19 01:48:37.192653] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:26.801 [2024-11-19 01:48:37.192715] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:26.801 [2024-11-19 01:48:37.192732] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:26.802 [2024-11-19 01:48:37.193016] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:26.802 [2024-11-19 01:48:37.193037] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:26.802 [2024-11-19 01:48:37.193087] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:26.802 [2024-11-19 01:48:37.193099] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:26.802 [2024-11-19 01:48:37.260644] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:26.802 01:48:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:07:26.802 01:48:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:26.802 01:48:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:07:26.802 01:48:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:07:26.802 01:48:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:07:26.802 01:48:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:26.802 00:07:26.802 real 0m0.418s 00:07:26.802 user 0m0.201s 00:07:26.802 sys 0m0.120s 00:07:26.802 01:48:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.802 01:48:37 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:26.802 ************************************ 00:07:26.802 END TEST dd_unknown_flag 00:07:26.802 ************************************ 00:07:26.802 01:48:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:07:26.802 01:48:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:26.802 01:48:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.802 01:48:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:26.802 ************************************ 00:07:26.802 START TEST dd_invalid_json 00:07:26.802 ************************************ 00:07:26.802 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:07:26.802 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:26.802 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:07:26.802 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:07:26.802 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:26.802 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.802 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.802 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.802 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.802 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.802 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.802 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.802 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:26.802 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:27.062 [2024-11-19 01:48:37.442411] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
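The --json /dev/fd/62 argument above is how this test hands spdk_dd its configuration: over an extra file descriptor that, in the invalid_json case, carries nothing (note the bare ":" traced at negative_dd.sh@94). That empty stream is what trips the "JSON data cannot be empty" parser error below. A rough stand-alone equivalent using process substitution (a sketch, not the harness's exact plumbing):

    # Feed an empty JSON config; expect "JSON data cannot be empty":
    ./build/bin/spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 \
        --json <(printf '')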
00:07:27.062 [2024-11-19 01:48:37.442639] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73805 ] 00:07:27.062 [2024-11-19 01:48:37.598434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.062 [2024-11-19 01:48:37.624181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.062 [2024-11-19 01:48:37.624306] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:27.062 [2024-11-19 01:48:37.624327] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:27.062 [2024-11-19 01:48:37.624339] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:27.062 [2024-11-19 01:48:37.624391] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:27.322 00:07:27.322 real 0m0.306s 00:07:27.322 user 0m0.137s 00:07:27.322 sys 0m0.065s 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:27.322 ************************************ 00:07:27.322 END TEST dd_invalid_json 00:07:27.322 ************************************ 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:27.322 ************************************ 00:07:27.322 START TEST dd_invalid_seek 00:07:27.322 ************************************ 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:27.322 
01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:27.322 01:48:37 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:27.322 [2024-11-19 01:48:37.800315] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
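The JSON block that follows is the bdev config generated for this test: two malloc bdevs, each 512 blocks of 512 bytes. With only 512 blocks in the output bdev, --seek=513 points past the end of the device, which is exactly what the "--seek value too big" error further down reports. The arithmetic, as a sketch:

    # Why --seek=513 fails against a 512-block malloc bdev:
    blocks=512 seek=513
    (( seek > blocks )) && \
        echo "--seek value too big ($seek) - only $blocks blocks available in output"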
00:07:27.322 [2024-11-19 01:48:37.800443] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73834 ] 00:07:27.322 { 00:07:27.322 "subsystems": [ 00:07:27.322 { 00:07:27.322 "subsystem": "bdev", 00:07:27.322 "config": [ 00:07:27.322 { 00:07:27.322 "params": { 00:07:27.322 "block_size": 512, 00:07:27.322 "num_blocks": 512, 00:07:27.322 "name": "malloc0" 00:07:27.322 }, 00:07:27.322 "method": "bdev_malloc_create" 00:07:27.322 }, 00:07:27.322 { 00:07:27.322 "params": { 00:07:27.322 "block_size": 512, 00:07:27.322 "num_blocks": 512, 00:07:27.322 "name": "malloc1" 00:07:27.322 }, 00:07:27.322 "method": "bdev_malloc_create" 00:07:27.322 }, 00:07:27.322 { 00:07:27.322 "method": "bdev_wait_for_examine" 00:07:27.322 } 00:07:27.322 ] 00:07:27.322 } 00:07:27.322 ] 00:07:27.322 } 00:07:27.582 [2024-11-19 01:48:37.954478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.582 [2024-11-19 01:48:37.981836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.582 [2024-11-19 01:48:38.017428] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.582 [2024-11-19 01:48:38.062559] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:07:27.582 [2024-11-19 01:48:38.062666] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:27.582 [2024-11-19 01:48:38.134813] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:27.582 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:07:27.582 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:27.582 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:07:27.582 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:07:27.582 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:07:27.582 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:27.582 00:07:27.582 real 0m0.448s 00:07:27.582 user 0m0.287s 00:07:27.582 sys 0m0.125s 00:07:27.582 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.582 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:27.582 ************************************ 00:07:27.582 END TEST dd_invalid_seek 00:07:27.582 ************************************ 00:07:27.841 01:48:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:07:27.841 01:48:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.841 01:48:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.841 01:48:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:27.841 ************************************ 00:07:27.841 START TEST dd_invalid_skip 00:07:27.841 ************************************ 00:07:27.841 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:07:27.841 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:27.842 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:27.842 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:07:27.842 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:27.842 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:27.842 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:07:27.842 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:27.842 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:07:27.842 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:27.842 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:07:27.842 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.842 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:07:27.842 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:27.842 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.842 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.842 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.842 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.842 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:27.842 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:27.842 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:27.842 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:27.842 { 00:07:27.842 "subsystems": [ 00:07:27.842 { 00:07:27.842 "subsystem": "bdev", 00:07:27.842 "config": [ 00:07:27.842 { 00:07:27.842 "params": { 00:07:27.842 "block_size": 512, 00:07:27.842 "num_blocks": 512, 00:07:27.842 "name": "malloc0" 00:07:27.842 }, 00:07:27.842 "method": "bdev_malloc_create" 00:07:27.842 }, 00:07:27.842 { 00:07:27.842 "params": { 00:07:27.842 "block_size": 512, 00:07:27.842 "num_blocks": 512, 00:07:27.842 "name": "malloc1" 
00:07:27.842 }, 00:07:27.842 "method": "bdev_malloc_create" 00:07:27.842 }, 00:07:27.842 { 00:07:27.842 "method": "bdev_wait_for_examine" 00:07:27.842 } 00:07:27.842 ] 00:07:27.842 } 00:07:27.842 ] 00:07:27.842 } 00:07:27.842 [2024-11-19 01:48:38.304213] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:07:27.842 [2024-11-19 01:48:38.304326] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73868 ] 00:07:27.842 [2024-11-19 01:48:38.450207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.101 [2024-11-19 01:48:38.471722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.101 [2024-11-19 01:48:38.500654] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.102 [2024-11-19 01:48:38.541199] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:07:28.102 [2024-11-19 01:48:38.541265] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:28.102 [2024-11-19 01:48:38.599512] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:28.102 00:07:28.102 real 0m0.400s 00:07:28.102 user 0m0.252s 00:07:28.102 sys 0m0.102s 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:28.102 ************************************ 00:07:28.102 END TEST dd_invalid_skip 00:07:28.102 ************************************ 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:28.102 ************************************ 00:07:28.102 START TEST dd_invalid_input_count 00:07:28.102 ************************************ 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:28.102 01:48:38 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:28.102 01:48:38 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:28.361 { 00:07:28.361 "subsystems": [ 00:07:28.361 { 00:07:28.361 "subsystem": "bdev", 00:07:28.361 "config": [ 00:07:28.361 { 00:07:28.361 "params": { 00:07:28.361 "block_size": 512, 00:07:28.361 "num_blocks": 512, 00:07:28.361 "name": "malloc0" 00:07:28.361 }, 00:07:28.361 "method": "bdev_malloc_create" 00:07:28.361 }, 00:07:28.361 { 00:07:28.361 "params": { 00:07:28.361 "block_size": 512, 00:07:28.361 "num_blocks": 512, 00:07:28.361 "name": "malloc1" 00:07:28.361 }, 00:07:28.361 "method": "bdev_malloc_create" 00:07:28.361 }, 00:07:28.361 { 00:07:28.361 "method": "bdev_wait_for_examine" 00:07:28.361 } 
00:07:28.361 ] 00:07:28.361 } 00:07:28.361 ] 00:07:28.361 } 00:07:28.361 [2024-11-19 01:48:38.762645] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:07:28.361 [2024-11-19 01:48:38.762775] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73896 ] 00:07:28.361 [2024-11-19 01:48:38.906778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.361 [2024-11-19 01:48:38.927137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.361 [2024-11-19 01:48:38.954760] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.628 [2024-11-19 01:48:38.995503] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:07:28.629 [2024-11-19 01:48:38.995779] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:28.629 [2024-11-19 01:48:39.066049] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:28.629 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:07:28.629 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:28.629 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:07:28.629 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:07:28.629 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:07:28.629 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:28.629 00:07:28.629 real 0m0.419s 00:07:28.629 user 0m0.253s 00:07:28.629 sys 0m0.127s 00:07:28.629 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.629 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:28.629 ************************************ 00:07:28.629 END TEST dd_invalid_input_count 00:07:28.629 ************************************ 00:07:28.629 01:48:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:07:28.629 01:48:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.629 01:48:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.629 01:48:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:28.629 ************************************ 00:07:28.629 START TEST dd_invalid_output_count 00:07:28.629 ************************************ 00:07:28.629 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count 00:07:28.629 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:28.629 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:28.629 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A 
method_bdev_malloc_create_0 00:07:28.629 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:28.629 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:07:28.629 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:07:28.629 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:28.629 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:07:28.629 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.629 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:28.629 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.629 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.629 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.629 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.629 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.629 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.629 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:28.629 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:28.629 { 00:07:28.629 "subsystems": [ 00:07:28.629 { 00:07:28.629 "subsystem": "bdev", 00:07:28.629 "config": [ 00:07:28.629 { 00:07:28.629 "params": { 00:07:28.629 "block_size": 512, 00:07:28.629 "num_blocks": 512, 00:07:28.629 "name": "malloc0" 00:07:28.629 }, 00:07:28.629 "method": "bdev_malloc_create" 00:07:28.629 }, 00:07:28.629 { 00:07:28.629 "method": "bdev_wait_for_examine" 00:07:28.629 } 00:07:28.629 ] 00:07:28.629 } 00:07:28.629 ] 00:07:28.629 } 00:07:28.629 [2024-11-19 01:48:39.236156] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
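This case mirrors the input-count test just above from the other side: the source is a plain dump file and the destination is a single 512-block malloc bdev, so --count=513 overruns the output rather than the input. A stand-alone equivalent, assuming the single-malloc0 config printed below is saved to a file (the harness pipes it over /dev/fd/62 instead; paths are placeholders):

    # conf.json holds the {"subsystems": [... malloc0 ...]} block from the trace.
    ./build/bin/spdk_dd --if=test/dd/dd.dump0 --ob=malloc0 \
        --count=513 --json conf.json --bs=512
    # expected: "--count value too big (513) - only 512 blocks available in output"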
00:07:28.629 [2024-11-19 01:48:39.236291] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73935 ] 00:07:28.898 [2024-11-19 01:48:39.382972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.898 [2024-11-19 01:48:39.402758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.898 [2024-11-19 01:48:39.431190] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.898 [2024-11-19 01:48:39.465101] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:07:28.898 [2024-11-19 01:48:39.465176] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:29.157 [2024-11-19 01:48:39.529427] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:29.157 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:07:29.157 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:29.157 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:07:29.157 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:07:29.157 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:07:29.157 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:29.157 00:07:29.157 real 0m0.404s 00:07:29.157 user 0m0.250s 00:07:29.157 sys 0m0.109s 00:07:29.157 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.157 01:48:39 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:29.157 ************************************ 00:07:29.157 END TEST dd_invalid_output_count 00:07:29.157 ************************************ 00:07:29.157 01:48:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:07:29.157 01:48:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:29.157 01:48:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.157 01:48:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:29.157 ************************************ 00:07:29.157 START TEST dd_bs_not_multiple 00:07:29.157 ************************************ 00:07:29.157 01:48:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:07:29.157 01:48:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:29.157 01:48:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:29.157 01:48:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:07:29.157 01:48:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:29.157 01:48:39 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:29.157 01:48:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:07:29.157 01:48:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:29.157 01:48:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:07:29.157 01:48:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:07:29.157 01:48:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:29.157 01:48:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.157 01:48:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:07:29.157 01:48:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:29.157 01:48:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.157 01:48:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.158 01:48:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.158 01:48:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.158 01:48:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:29.158 01:48:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:29.158 01:48:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:29.158 01:48:39 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:29.158 { 00:07:29.158 "subsystems": [ 00:07:29.158 { 00:07:29.158 "subsystem": "bdev", 00:07:29.158 "config": [ 00:07:29.158 { 00:07:29.158 "params": { 00:07:29.158 "block_size": 512, 00:07:29.158 "num_blocks": 512, 00:07:29.158 "name": "malloc0" 00:07:29.158 }, 00:07:29.158 "method": "bdev_malloc_create" 00:07:29.158 }, 00:07:29.158 { 00:07:29.158 "params": { 00:07:29.158 "block_size": 512, 00:07:29.158 "num_blocks": 512, 00:07:29.158 "name": "malloc1" 00:07:29.158 }, 00:07:29.158 "method": "bdev_malloc_create" 00:07:29.158 }, 00:07:29.158 { 00:07:29.158 "method": "bdev_wait_for_examine" 00:07:29.158 } 00:07:29.158 ] 00:07:29.158 } 00:07:29.158 ] 00:07:29.158 } 00:07:29.158 [2024-11-19 01:48:39.701065] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
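This final negative case checks divisibility rather than capacity: both malloc bdevs use a 512-byte native block, and --bs=513 is rejected because it is not a multiple of that size. The check, stated the way the error message below puts it:

    # --bs must be a multiple of the input bdev's native block size:
    bs=513 native=512
    (( bs % native == 0 )) || \
        echo "--bs value must be a multiple of input native block size ($native)"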
00:07:29.158 [2024-11-19 01:48:39.701181] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73961 ] 00:07:29.417 [2024-11-19 01:48:39.848702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.417 [2024-11-19 01:48:39.869601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.417 [2024-11-19 01:48:39.902479] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.417 [2024-11-19 01:48:39.945444] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:07:29.417 [2024-11-19 01:48:39.945574] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:29.417 [2024-11-19 01:48:40.011725] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:29.676 01:48:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:07:29.676 01:48:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:29.676 01:48:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:07:29.676 01:48:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:07:29.676 01:48:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:07:29.676 01:48:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:29.676 00:07:29.676 real 0m0.426s 00:07:29.676 user 0m0.265s 00:07:29.676 sys 0m0.122s 00:07:29.676 01:48:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.676 01:48:40 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:29.676 ************************************ 00:07:29.676 END TEST dd_bs_not_multiple 00:07:29.676 ************************************ 00:07:29.676 00:07:29.676 real 0m5.006s 00:07:29.676 user 0m2.611s 00:07:29.676 sys 0m1.778s 00:07:29.676 01:48:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.676 ************************************ 00:07:29.676 01:48:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:29.676 END TEST spdk_dd_negative 00:07:29.677 ************************************ 00:07:29.677 00:07:29.677 real 1m0.355s 00:07:29.677 user 0m38.063s 00:07:29.677 sys 0m25.567s 00:07:29.677 01:48:40 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.677 01:48:40 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:29.677 ************************************ 00:07:29.677 END TEST spdk_dd 00:07:29.677 ************************************ 00:07:29.677 01:48:40 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:29.677 01:48:40 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:29.677 01:48:40 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:29.677 01:48:40 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:29.677 01:48:40 -- common/autotest_common.sh@10 -- # set +x 00:07:29.677 01:48:40 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:29.677 01:48:40 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:29.677 01:48:40 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:29.677 01:48:40 -- spdk/autotest.sh@277 -- 
# export NET_TYPE 00:07:29.677 01:48:40 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:29.677 01:48:40 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:29.677 01:48:40 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:29.677 01:48:40 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:29.677 01:48:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.677 01:48:40 -- common/autotest_common.sh@10 -- # set +x 00:07:29.677 ************************************ 00:07:29.677 START TEST nvmf_tcp 00:07:29.677 ************************************ 00:07:29.677 01:48:40 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:29.936 * Looking for test storage... 00:07:29.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:29.936 01:48:40 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:29.936 01:48:40 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:29.936 01:48:40 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:29.936 01:48:40 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:29.936 01:48:40 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.936 01:48:40 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.936 01:48:40 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.936 01:48:40 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.936 01:48:40 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.936 01:48:40 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.936 01:48:40 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.936 01:48:40 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.936 01:48:40 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.936 01:48:40 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.936 01:48:40 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.936 01:48:40 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:29.936 01:48:40 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:29.936 01:48:40 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.936 01:48:40 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.936 01:48:40 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:29.936 01:48:40 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:29.936 01:48:40 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.936 01:48:40 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:29.936 01:48:40 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.936 01:48:40 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:29.936 01:48:40 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:29.936 01:48:40 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.936 01:48:40 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:29.936 01:48:40 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.936 01:48:40 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.936 01:48:40 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.936 01:48:40 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:29.936 01:48:40 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.937 01:48:40 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:29.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.937 --rc genhtml_branch_coverage=1 00:07:29.937 --rc genhtml_function_coverage=1 00:07:29.937 --rc genhtml_legend=1 00:07:29.937 --rc geninfo_all_blocks=1 00:07:29.937 --rc geninfo_unexecuted_blocks=1 00:07:29.937 00:07:29.937 ' 00:07:29.937 01:48:40 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:29.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.937 --rc genhtml_branch_coverage=1 00:07:29.937 --rc genhtml_function_coverage=1 00:07:29.937 --rc genhtml_legend=1 00:07:29.937 --rc geninfo_all_blocks=1 00:07:29.937 --rc geninfo_unexecuted_blocks=1 00:07:29.937 00:07:29.937 ' 00:07:29.937 01:48:40 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:29.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.937 --rc genhtml_branch_coverage=1 00:07:29.937 --rc genhtml_function_coverage=1 00:07:29.937 --rc genhtml_legend=1 00:07:29.937 --rc geninfo_all_blocks=1 00:07:29.937 --rc geninfo_unexecuted_blocks=1 00:07:29.937 00:07:29.937 ' 00:07:29.937 01:48:40 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:29.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.937 --rc genhtml_branch_coverage=1 00:07:29.937 --rc genhtml_function_coverage=1 00:07:29.937 --rc genhtml_legend=1 00:07:29.937 --rc geninfo_all_blocks=1 00:07:29.937 --rc geninfo_unexecuted_blocks=1 00:07:29.937 00:07:29.937 ' 00:07:29.937 01:48:40 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:29.937 01:48:40 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:29.937 01:48:40 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:29.937 01:48:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:29.937 01:48:40 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.937 01:48:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:29.937 ************************************ 00:07:29.937 START TEST nvmf_target_core 00:07:29.937 ************************************ 00:07:29.937 01:48:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:29.937 * Looking for test storage... 00:07:29.937 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:29.937 01:48:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:29.937 01:48:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:07:29.937 01:48:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:30.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.197 --rc genhtml_branch_coverage=1 00:07:30.197 --rc genhtml_function_coverage=1 00:07:30.197 --rc genhtml_legend=1 00:07:30.197 --rc geninfo_all_blocks=1 00:07:30.197 --rc geninfo_unexecuted_blocks=1 00:07:30.197 00:07:30.197 ' 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:30.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.197 --rc genhtml_branch_coverage=1 00:07:30.197 --rc genhtml_function_coverage=1 00:07:30.197 --rc genhtml_legend=1 00:07:30.197 --rc geninfo_all_blocks=1 00:07:30.197 --rc geninfo_unexecuted_blocks=1 00:07:30.197 00:07:30.197 ' 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:30.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.197 --rc genhtml_branch_coverage=1 00:07:30.197 --rc genhtml_function_coverage=1 00:07:30.197 --rc genhtml_legend=1 00:07:30.197 --rc geninfo_all_blocks=1 00:07:30.197 --rc geninfo_unexecuted_blocks=1 00:07:30.197 00:07:30.197 ' 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:30.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.197 --rc genhtml_branch_coverage=1 00:07:30.197 --rc genhtml_function_coverage=1 00:07:30.197 --rc genhtml_legend=1 00:07:30.197 --rc geninfo_all_blocks=1 00:07:30.197 --rc geninfo_unexecuted_blocks=1 00:07:30.197 00:07:30.197 ' 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
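The common.sh block above pins the harness defaults (ports 4420-4422, NET_TYPE=virt) and derives NVME_HOSTNQN from `nvme gen-hostnqn`, which emits the NVMe-spec UUID-based host NQN seen in the trace. A minimal sketch of an equivalent fallback, assuming `uuidgen` is available; the function name is illustrative, not part of the harness:

    # Equivalent of `nvme gen-hostnqn`: the fixed NVMe-spec prefix plus a
    # lowercase RFC 4122 UUID, matching the NVME_HOSTNQN value traced above.
    gen_hostnqn_fallback() {
        printf 'nqn.2014-08.org.nvmexpress:uuid:%s\n' "$(uuidgen | tr '[:upper:]' '[:lower:]')"
    }
    NVME_HOSTNQN=${NVME_HOSTNQN:-$(gen_hostnqn_fallback)}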
00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.197 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.198 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.198 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:30.198 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:30.198 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:30.198 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:30.198 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:30.198 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:30.198 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:30.198 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:07:30.198 01:48:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:30.198 01:48:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:30.198 01:48:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.198 01:48:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:30.198 ************************************ 00:07:30.198 START TEST nvmf_host_management 00:07:30.198 ************************************ 00:07:30.198 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:30.198 * Looking for test storage... 
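The `[: : integer expression expected` message that common.sh emits at line 33 above (and again on each re-source below) is the classic bash pitfall of handing an empty string to a numeric test: `'[' '' -eq 1 ']'` has no integer on the left-hand side. A minimal reproduction and the usual guard; the variable name is a stand-in, not the one common.sh actually tests:

    FLAG=''                              # unset/empty in this environment
    [ "$FLAG" -eq 1 ] && echo enabled    # stderr: [: : integer expression expected
    [ "${FLAG:-0}" -eq 1 ] && echo enabled   # guarded: empty operand defaults to 0

The failing test returns non-zero rather than aborting, which is why the run continues past the message exactly as the log shows.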
00:07:30.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:30.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.458 --rc genhtml_branch_coverage=1 00:07:30.458 --rc genhtml_function_coverage=1 00:07:30.458 --rc genhtml_legend=1 00:07:30.458 --rc geninfo_all_blocks=1 00:07:30.458 --rc geninfo_unexecuted_blocks=1 00:07:30.458 00:07:30.458 ' 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:30.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.458 --rc genhtml_branch_coverage=1 00:07:30.458 --rc genhtml_function_coverage=1 00:07:30.458 --rc genhtml_legend=1 00:07:30.458 --rc geninfo_all_blocks=1 00:07:30.458 --rc geninfo_unexecuted_blocks=1 00:07:30.458 00:07:30.458 ' 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:30.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.458 --rc genhtml_branch_coverage=1 00:07:30.458 --rc genhtml_function_coverage=1 00:07:30.458 --rc genhtml_legend=1 00:07:30.458 --rc geninfo_all_blocks=1 00:07:30.458 --rc geninfo_unexecuted_blocks=1 00:07:30.458 00:07:30.458 ' 00:07:30.458 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:30.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.458 --rc genhtml_branch_coverage=1 00:07:30.458 --rc genhtml_function_coverage=1 00:07:30.459 --rc genhtml_legend=1 00:07:30.459 --rc geninfo_all_blocks=1 00:07:30.459 --rc geninfo_unexecuted_blocks=1 00:07:30.459 00:07:30.459 ' 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
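The xtrace block above (repeated once per test scope) is scripts/common.sh gating coverage flags on the installed lcov: `lt 1.15 2` tokenizes both versions on `.`, `-` and `:` and compares them component-wise, and since 1 < 2 the legacy `--rc lcov_*` option spellings get exported. A condensed, self-contained sketch of that lt/cmp_versions logic:

    # Returns 0 (true) when dotted version $1 sorts before $2, as in the
    # trace above; missing components default to 0.
    lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo "lcov < 2: use legacy --rc lcov_branch_coverage=1 names"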
00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:30.459 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:30.459 01:48:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:30.459 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:30.460 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:30.460 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:30.460 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:30.460 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:30.460 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:30.460 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:30.460 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:30.460 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:30.460 Cannot find device "nvmf_init_br" 00:07:30.460 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:30.460 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:30.460 Cannot find device "nvmf_init_br2" 00:07:30.460 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:30.460 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:30.460 Cannot find device "nvmf_tgt_br" 00:07:30.460 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:07:30.460 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:30.460 Cannot find device "nvmf_tgt_br2" 00:07:30.460 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:07:30.460 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:30.460 Cannot find device "nvmf_init_br" 00:07:30.460 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:07:30.460 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:30.460 Cannot find device "nvmf_init_br2" 00:07:30.460 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:07:30.460 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:30.460 Cannot find device "nvmf_tgt_br" 00:07:30.460 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:07:30.460 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:30.460 Cannot find device "nvmf_tgt_br2" 00:07:30.460 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:07:30.460 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:30.460 Cannot find device "nvmf_br" 00:07:30.460 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:07:30.460 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:30.460 Cannot find device "nvmf_init_if" 00:07:30.460 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:07:30.460 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:30.719 Cannot find device "nvmf_init_if2" 00:07:30.719 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:07:30.719 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:30.719 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:30.719 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:07:30.719 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:30.719 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:30.719 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:07:30.719 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:30.719 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:30.719 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:30.719 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:30.719 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:30.719 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:30.719 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:30.719 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:30.719 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:30.719 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:30.719 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:30.719 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:30.719 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:30.719 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:30.719 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:30.719 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:30.719 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:30.719 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:30.719 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:30.719 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:30.719 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:07:30.719 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:30.720 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:30.720 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:30.720 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:30.979 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:30.979 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:30.979 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:30.979 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:30.979 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:30.979 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:30.979 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:30.979 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:30.979 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:30.979 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:07:30.979 00:07:30.979 --- 10.0.0.3 ping statistics --- 00:07:30.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.979 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:07:30.979 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:30.979 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:30.979 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:07:30.979 00:07:30.979 --- 10.0.0.4 ping statistics --- 00:07:30.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.979 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:07:30.979 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:30.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:30.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:07:30.979 00:07:30.979 --- 10.0.0.1 ping statistics --- 00:07:30.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.979 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:07:30.979 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:30.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:30.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:07:30.979 00:07:30.979 --- 10.0.0.2 ping statistics --- 00:07:30.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.979 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:07:30.979 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:30.979 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:07:30.979 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:30.979 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:30.979 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:30.979 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:30.979 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:30.979 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:30.979 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:30.979 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:30.979 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:30.979 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:30.979 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:30.979 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:30.979 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:30.979 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=74312 00:07:30.980 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:30.980 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 74312 00:07:30.980 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 74312 ']' 00:07:30.980 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.980 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.980 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.980 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.980 01:48:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:30.980 [2024-11-19 01:48:41.535743] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:07:30.980 [2024-11-19 01:48:41.536277] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.239 [2024-11-19 01:48:41.690838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:31.239 [2024-11-19 01:48:41.719366] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:31.239 [2024-11-19 01:48:41.719439] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:31.239 [2024-11-19 01:48:41.719460] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:31.239 [2024-11-19 01:48:41.719475] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:31.239 [2024-11-19 01:48:41.719488] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:31.239 [2024-11-19 01:48:41.720557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.239 [2024-11-19 01:48:41.721084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.239 [2024-11-19 01:48:41.721222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:31.239 [2024-11-19 01:48:41.721421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.239 [2024-11-19 01:48:41.774337] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.176 [2024-11-19 01:48:42.568650] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
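nvmfappstart launched nvmf_tgt inside the target netns with `-m 0x1E`; 0x1E is binary 11110, which is why DPDK reports four cores and the reactors above land on cores 1, 2, 3 and 4, leaving core 0 free for the bdevperf initiator started later with `-c 0x1`. A quick decode of such a mask:

    mask=0x1E
    for (( core = 0; core < 8; core++ )); do
        (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
    done
    # prints cores 1 2 3 4 -- matching the four reactor_run notices above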
00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.176 Malloc0 00:07:32.176 [2024-11-19 01:48:42.644609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=74366 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 74366 /var/tmp/bdevperf.sock 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 74366 ']' 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:32.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
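The `cat | rpc_cmd` step above replays the generated rpcs.txt batch against the target, producing the Malloc0 bdev and the subsystem whose listener notice on 10.0.0.3:4420 appears mid-block. A hand-written equivalent using scripts/rpc.py, assuming the harness defaults visible in this log (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, the cnode0/host0 NQNs, NVMF_SERIAL); the exact flags are indicative, not a transcript of rpcs.txt:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192        # done at @18 above
    $rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420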
00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:32.176 { 00:07:32.176 "params": { 00:07:32.176 "name": "Nvme$subsystem", 00:07:32.176 "trtype": "$TEST_TRANSPORT", 00:07:32.176 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:32.176 "adrfam": "ipv4", 00:07:32.176 "trsvcid": "$NVMF_PORT", 00:07:32.176 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:32.176 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:32.176 "hdgst": ${hdgst:-false}, 00:07:32.176 "ddgst": ${ddgst:-false} 00:07:32.176 }, 00:07:32.176 "method": "bdev_nvme_attach_controller" 00:07:32.176 } 00:07:32.176 EOF 00:07:32.176 )") 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:32.176 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:32.176 "params": { 00:07:32.176 "name": "Nvme0", 00:07:32.176 "trtype": "tcp", 00:07:32.176 "traddr": "10.0.0.3", 00:07:32.176 "adrfam": "ipv4", 00:07:32.176 "trsvcid": "4420", 00:07:32.176 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:32.176 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:32.176 "hdgst": false, 00:07:32.176 "ddgst": false 00:07:32.176 }, 00:07:32.176 "method": "bdev_nvme_attach_controller" 00:07:32.176 }' 00:07:32.176 [2024-11-19 01:48:42.777921] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:07:32.176 [2024-11-19 01:48:42.778073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74366 ] 00:07:32.435 [2024-11-19 01:48:42.942042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.435 [2024-11-19 01:48:42.974143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.435 [2024-11-19 01:48:43.022077] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.694 Running I/O for 10 seconds... 
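bdevperf is now driving verify I/O (queue depth 64, 64 KiB I/O size, 10 s) against the JSON-configured Nvme0 controller, and the trace that follows polls it for progress: waitforio re-reads bdev_get_iostat until Nvme0n1 has accumulated at least 100 reads (67 on the first sample, 579 a quarter-second later). A condensed sketch of that polling loop, using the harness's rpc_cmd wrapper as seen in the trace:

    # Condensed waitforio, as traced below: up to 10 samples, 0.25 s apart,
    # succeeding once the bdev reports >= 100 completed reads.
    waitforio() {
        local sock=$1 bdev=$2 i count
        for (( i = 10; i != 0; i-- )); do
            count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
            [ "$count" -ge 100 ] && return 0
            sleep 0.25
        done
        return 1
    }
    waitforio /var/tmp/bdevperf.sock Nvme0n1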
00:07:32.694 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.694 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:32.694 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:32.694 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.694 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.694 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.694 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:32.694 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:32.694 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:32.694 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:32.694 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:32.694 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:32.694 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:32.694 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:32.694 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:32.694 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.694 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:32.694 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.694 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.694 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:32.694 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:32.694 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:32.982 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:32.982 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:32.982 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:32.982 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.982 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:32.982 01:48:43 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:32.982 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.982 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:07:32.982 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:07:32.982 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:32.982 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:32.982 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:32.982 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:32.982 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.982 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:32.982 [2024-11-19 01:48:43.577553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:32.982 [2024-11-19 01:48:43.577619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the identical print_command/print_completion pair repeats for every other in-flight I/O: WRITE sqid:1 cid:37-63 (lba 86656-89984) and READ sqid:1 cid:0-35 (lba 81920-86400), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:07:32.984 [2024-11-19 01:48:43.580436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:07:32.984 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:32.984 task offset: 86528 on job bdev=Nvme0n1 fails
00:07:32.984
00:07:32.984 Latency(us)
00:07:32.984 [2024-11-19T01:48:43.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:32.984 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:32.984 Job: Nvme0n1 ended in about 0.45 seconds with error
00:07:32.984 Verification LBA range: start 0x0 length 0x400
00:07:32.984 Nvme0n1 : 0.45 1427.95 89.25 142.80 0.00 39140.91 2278.87 43849.54
00:07:32.984 [2024-11-19T01:48:43.600Z] ===================================================================================================================
00:07:32.985 [2024-11-19T01:48:43.600Z] Total : 1427.95 89.25 142.80 0.00 39140.91 2278.87 43849.54
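The wall of ABORTED - SQ DELETION completions above is the point of the test, not a malfunction: nvmf_subsystem_remove_host revoked host0 while bdevperf had its full queue depth outstanding, so the target dropped the queue pair, all 64 in-flight commands (28 writes, 36 reads) came back aborted, and the host driver reacted by resetting the controller. The revocation is only issued once I/O is verifiably flowing, which is what the waitforio loop traced earlier guarantees. A rough standalone equivalent of that poll-then-revoke sequence, assuming rpc.py and jq are on PATH (the test's own rpc_cmd/waitforio helpers wrap these same calls):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Poll bdevperf's iostat until at least 100 reads have completed,
    # trying up to 10 times at 0.25 s intervals, as in the trace above.
    for _ in $(seq 10); do
        reads=$($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
                jq -r '.bdevs[0].num_read_ops')
        [ "$reads" -ge 100 ] && break
        sleep 0.25
    done
    # Revoke the host while its I/O is still in flight; this goes to the
    # target app on its default RPC socket, not to bdevperf's socket.
    $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0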
00:07:32.985 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:07:32.985 [2024-11-19 01:48:43.583315] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:32.985 [2024-11-19 01:48:43.583441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cc7d0 (9): Bad file descriptor
00:07:32.985 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:32.985 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:32.985 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
[2024-11-19 01:48:43.597207] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:07:34.366 01:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 74366
/home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (74366) - No such process
01:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
01:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
01:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
01:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
01:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
01:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
01:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
01:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:07:34.366 {
00:07:34.366 "params": {
00:07:34.366 "name": "Nvme$subsystem",
00:07:34.366 "trtype": "$TEST_TRANSPORT",
00:07:34.366 "traddr": "$NVMF_FIRST_TARGET_IP",
00:07:34.366 "adrfam": "ipv4",
00:07:34.366 "trsvcid": "$NVMF_PORT",
00:07:34.366 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:07:34.366 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:07:34.366 "hdgst": ${hdgst:-false},
00:07:34.366 "ddgst": ${ddgst:-false}
00:07:34.366 },
00:07:34.366 "method": "bdev_nvme_attach_controller"
00:07:34.366 }
00:07:34.366 EOF
00:07:34.366 )")
00:07:34.366 01:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:07:34.366 01:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:07:34.366 01:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:34.366 01:48:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:34.366 "params": { 00:07:34.366 "name": "Nvme0", 00:07:34.366 "trtype": "tcp", 00:07:34.366 "traddr": "10.0.0.3", 00:07:34.366 "adrfam": "ipv4", 00:07:34.366 "trsvcid": "4420", 00:07:34.366 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:34.366 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:34.366 "hdgst": false, 00:07:34.366 "ddgst": false 00:07:34.366 }, 00:07:34.366 "method": "bdev_nvme_attach_controller" 00:07:34.366 }' 00:07:34.366 [2024-11-19 01:48:44.663421] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:07:34.366 [2024-11-19 01:48:44.663532] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74406 ] 00:07:34.366 [2024-11-19 01:48:44.815048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.366 [2024-11-19 01:48:44.839575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.366 [2024-11-19 01:48:44.881985] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.625 Running I/O for 1 seconds... 00:07:35.561 1536.00 IOPS, 96.00 MiB/s 00:07:35.561 Latency(us) 00:07:35.561 [2024-11-19T01:48:46.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:35.561 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:35.561 Verification LBA range: start 0x0 length 0x400 00:07:35.561 Nvme0n1 : 1.03 1546.74 96.67 0.00 0.00 40556.58 3708.74 37653.41 00:07:35.561 [2024-11-19T01:48:46.176Z] =================================================================================================================== 00:07:35.561 [2024-11-19T01:48:46.176Z] Total : 1546.74 96.67 0.00 0.00 40556.58 3708.74 37653.41 00:07:35.561 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:35.561 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:35.561 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:35.561 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:35.561 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:35.561 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:35.561 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:35.820 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:35.820 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:35.820 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:35.820 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:35.820 rmmod nvme_tcp 00:07:35.820 rmmod nvme_fabrics 
00:07:35.820 rmmod nvme_keyring 00:07:35.820 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:35.820 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:35.821 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:35.821 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 74312 ']' 00:07:35.821 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 74312 00:07:35.821 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 74312 ']' 00:07:35.821 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 74312 00:07:35.821 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:35.821 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.821 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74312 00:07:35.821 killing process with pid 74312 00:07:35.821 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:35.821 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:35.821 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74312' 00:07:35.821 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 74312 00:07:35.821 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 74312 00:07:35.821 [2024-11-19 01:48:46.435129] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:36.080 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:36.080 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:36.080 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:36.080 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:36.080 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:36.080 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:36.080 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:36.080 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:36.080 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:36.080 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:36.080 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:36.080 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:36.080 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 
-- # ip link set nvmf_tgt_br2 nomaster 00:07:36.080 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:36.080 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:36.080 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:36.080 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:36.080 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:36.080 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:36.080 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:36.080 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:36.080 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:36.339 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:36.339 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.339 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:36.339 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:36.339 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:07:36.339 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:36.339 00:07:36.339 real 0m6.011s 00:07:36.339 user 0m21.562s 00:07:36.339 sys 0m1.410s 00:07:36.339 ************************************ 00:07:36.339 END TEST nvmf_host_management 00:07:36.339 ************************************ 00:07:36.339 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.339 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:36.339 01:48:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:36.339 01:48:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:36.339 01:48:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.339 01:48:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:36.339 ************************************ 00:07:36.339 START TEST nvmf_lvol 00:07:36.339 ************************************ 00:07:36.339 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:36.339 * Looking for test storage... 
00:07:36.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:36.339 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:36.339 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:07:36.339 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:36.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.600 --rc genhtml_branch_coverage=1 00:07:36.600 --rc genhtml_function_coverage=1 00:07:36.600 --rc genhtml_legend=1 00:07:36.600 --rc geninfo_all_blocks=1 00:07:36.600 --rc geninfo_unexecuted_blocks=1 00:07:36.600 00:07:36.600 ' 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:36.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.600 --rc genhtml_branch_coverage=1 00:07:36.600 --rc genhtml_function_coverage=1 00:07:36.600 --rc genhtml_legend=1 00:07:36.600 --rc geninfo_all_blocks=1 00:07:36.600 --rc geninfo_unexecuted_blocks=1 00:07:36.600 00:07:36.600 ' 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:36.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.600 --rc genhtml_branch_coverage=1 00:07:36.600 --rc genhtml_function_coverage=1 00:07:36.600 --rc genhtml_legend=1 00:07:36.600 --rc geninfo_all_blocks=1 00:07:36.600 --rc geninfo_unexecuted_blocks=1 00:07:36.600 00:07:36.600 ' 00:07:36.600 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:36.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.600 --rc genhtml_branch_coverage=1 00:07:36.600 --rc genhtml_function_coverage=1 00:07:36.600 --rc genhtml_legend=1 00:07:36.600 --rc geninfo_all_blocks=1 00:07:36.600 --rc geninfo_unexecuted_blocks=1 00:07:36.600 00:07:36.601 ' 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:36.601 01:48:46 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:36.601 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:36.601 
01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:36.601 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:36.601 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:36.601 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:36.601 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:36.601 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:36.601 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:36.601 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:36.601 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:36.601 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:36.601 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:36.601 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:36.601 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:36.601 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:36.601 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:36.601 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:36.601 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:36.601 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:36.601 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:36.601 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:36.601 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:36.601 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:07:36.601 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:36.601 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:36.601 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:36.601 Cannot find device "nvmf_init_br" 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:36.602 Cannot find device "nvmf_init_br2" 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:36.602 Cannot find device "nvmf_tgt_br" 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:36.602 Cannot find device "nvmf_tgt_br2" 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:36.602 Cannot find device "nvmf_init_br" 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:36.602 Cannot find device "nvmf_init_br2" 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:36.602 Cannot find device "nvmf_tgt_br" 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:36.602 Cannot find device "nvmf_tgt_br2" 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:36.602 Cannot find device "nvmf_br" 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:36.602 Cannot find device "nvmf_init_if" 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:36.602 Cannot find device "nvmf_init_if2" 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:36.602 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:36.602 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:36.602 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:36.861 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:36.861 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:36.862 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:36.862 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:07:36.862 00:07:36.862 --- 10.0.0.3 ping statistics --- 00:07:36.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.862 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:36.862 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:36.862 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:07:36.862 00:07:36.862 --- 10.0.0.4 ping statistics --- 00:07:36.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.862 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:36.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:36.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:07:36.862 00:07:36.862 --- 10.0.0.1 ping statistics --- 00:07:36.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.862 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:36.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:36.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:07:36.862 00:07:36.862 --- 10.0.0.2 ping statistics --- 00:07:36.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.862 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=74677 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 74677 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 74677 ']' 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.862 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:36.862 [2024-11-19 01:48:47.463354] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
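With all four addresses answering pings, nvmfappstart launches the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x7), records nvmfpid=74677, and waitforlisten blocks until the application answers on its UNIX-domain RPC socket. Conceptually that wait is a simple poll; a sketch under the assumption that any cheap RPC will do (the exact helper differs):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1    # keep polling until the target is up and listening
    done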
00:07:36.862 [2024-11-19 01:48:47.463443] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.120 [2024-11-19 01:48:47.608208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:37.120 [2024-11-19 01:48:47.629233] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.120 [2024-11-19 01:48:47.629540] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:37.120 [2024-11-19 01:48:47.629820] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:37.120 [2024-11-19 01:48:47.630083] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:37.120 [2024-11-19 01:48:47.630192] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:37.120 [2024-11-19 01:48:47.631108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.120 [2024-11-19 01:48:47.631240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.120 [2024-11-19 01:48:47.631245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.120 [2024-11-19 01:48:47.663016] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.120 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.120 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:37.120 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:37.120 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:37.120 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:37.379 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.379 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:37.637 [2024-11-19 01:48:48.040884] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.637 01:48:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:37.896 01:48:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:37.896 01:48:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:38.156 01:48:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:38.156 01:48:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:38.415 01:48:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:38.674 01:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=64206c2a-10d5-4e39-8e0f-a8186799cf7e 00:07:38.674 01:48:49 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 64206c2a-10d5-4e39-8e0f-a8186799cf7e lvol 20 00:07:38.932 01:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=bd5caf67-01bd-438b-ade3-198f9e0c4b38 00:07:38.932 01:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:39.191 01:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bd5caf67-01bd-438b-ade3-198f9e0c4b38 00:07:39.449 01:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:39.706 [2024-11-19 01:48:50.236499] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:39.706 01:48:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:39.966 01:48:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=74745 00:07:39.966 01:48:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:39.966 01:48:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:41.371 01:48:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot bd5caf67-01bd-438b-ade3-198f9e0c4b38 MY_SNAPSHOT 00:07:41.371 01:48:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=c6931379-3510-4409-b473-012d3a62efda 00:07:41.371 01:48:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize bd5caf67-01bd-438b-ade3-198f9e0c4b38 30 00:07:41.644 01:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone c6931379-3510-4409-b473-012d3a62efda MY_CLONE 00:07:41.901 01:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1231f3bb-c2dd-45f7-ba49-e0fa4efd16a3 00:07:41.901 01:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 1231f3bb-c2dd-45f7-ba49-e0fa4efd16a3 00:07:42.468 01:48:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 74745 00:07:50.587 Initializing NVMe Controllers 00:07:50.587 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:07:50.587 Controller IO queue size 128, less than required. 00:07:50.587 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:50.587 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:50.587 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:50.587 Initialization complete. Launching workers. 
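The lvol test body above reduces to a short RPC sequence: two 64 MiB malloc bdevs striped into raid0, a logical volume store on top, and a 20 MiB volume exported over NVMe/TCP on 10.0.0.3:4420, after which spdk_nvme_perf (pid 74745) drives random 4K writes at queue depth 128 for 10 seconds while the snapshot, resize to 30 MiB, clone, and inflate all run under that load. A condensed replay, with the rpc.py calls exactly as traced (the captured UUIDs differ per run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                                 # -> Malloc0
    $rpc bdev_malloc_create 64 512                                 # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"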
00:07:50.587 ======================================================== 00:07:50.587 Latency(us) 00:07:50.587 Device Information : IOPS MiB/s Average min max 00:07:50.587 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10743.20 41.97 11918.19 2316.81 54738.03 00:07:50.587 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10725.30 41.90 11933.49 1532.07 66652.44 00:07:50.587 ======================================================== 00:07:50.587 Total : 21468.50 83.86 11925.83 1532.07 66652.44 00:07:50.587 00:07:50.587 01:49:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:50.587 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete bd5caf67-01bd-438b-ade3-198f9e0c4b38 00:07:50.846 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 64206c2a-10d5-4e39-8e0f-a8186799cf7e 00:07:51.105 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:51.105 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:51.105 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:51.105 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:51.105 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:51.105 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:51.105 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:51.105 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:51.105 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:51.105 rmmod nvme_tcp 00:07:51.105 rmmod nvme_fabrics 00:07:51.105 rmmod nvme_keyring 00:07:51.364 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:51.364 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:51.364 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:51.364 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 74677 ']' 00:07:51.364 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 74677 00:07:51.364 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 74677 ']' 00:07:51.364 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 74677 00:07:51.364 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:51.364 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.364 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74677 00:07:51.364 killing process with pid 74677 00:07:51.364 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.364 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.364 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 74677' 00:07:51.364 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 74677 00:07:51.364 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 74677 00:07:51.364 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:51.364 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:51.364 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:51.364 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:51.364 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:51.364 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:51.365 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:51.365 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:51.365 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:51.365 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:51.365 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:51.365 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:51.365 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:51.623 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:51.623 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:51.623 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:51.623 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:51.623 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:51.623 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:51.623 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:51.623 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:51.623 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:51.624 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:51.624 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.624 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.624 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.624 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:07:51.624 00:07:51.624 real 0m15.376s 00:07:51.624 user 1m3.937s 00:07:51.624 sys 0m4.150s 00:07:51.624 ************************************ 00:07:51.624 END TEST nvmf_lvol 00:07:51.624 
************************************ 00:07:51.624 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.624 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:51.624 01:49:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:51.624 01:49:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:51.624 01:49:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.624 01:49:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:51.624 ************************************ 00:07:51.624 START TEST nvmf_lvs_grow 00:07:51.624 ************************************ 00:07:51.624 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:51.884 * Looking for test storage... 00:07:51.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:51.884 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:51.884 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:07:51.884 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:51.884 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:51.884 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.884 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.884 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.884 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.884 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.884 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.884 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.884 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.884 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.884 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.884 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.884 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:51.884 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:51.884 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:51.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.885 --rc genhtml_branch_coverage=1 00:07:51.885 --rc genhtml_function_coverage=1 00:07:51.885 --rc genhtml_legend=1 00:07:51.885 --rc geninfo_all_blocks=1 00:07:51.885 --rc geninfo_unexecuted_blocks=1 00:07:51.885 00:07:51.885 ' 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:51.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.885 --rc genhtml_branch_coverage=1 00:07:51.885 --rc genhtml_function_coverage=1 00:07:51.885 --rc genhtml_legend=1 00:07:51.885 --rc geninfo_all_blocks=1 00:07:51.885 --rc geninfo_unexecuted_blocks=1 00:07:51.885 00:07:51.885 ' 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:51.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.885 --rc genhtml_branch_coverage=1 00:07:51.885 --rc genhtml_function_coverage=1 00:07:51.885 --rc genhtml_legend=1 00:07:51.885 --rc geninfo_all_blocks=1 00:07:51.885 --rc geninfo_unexecuted_blocks=1 00:07:51.885 00:07:51.885 ' 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:51.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.885 --rc genhtml_branch_coverage=1 00:07:51.885 --rc genhtml_function_coverage=1 00:07:51.885 --rc genhtml_legend=1 00:07:51.885 --rc geninfo_all_blocks=1 00:07:51.885 --rc geninfo_unexecuted_blocks=1 00:07:51.885 00:07:51.885 ' 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:51.885 01:49:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:51.885 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
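Note the two RPC endpoints defined just above: rpc_py talks to the target on the default /var/tmp/spdk.sock, while this suite also drives a separate bdevperf process through bdevperf_rpc_sock=/var/tmp/bdevperf.sock. Both are ordinary rpc.py targets selected with -s; illustrative probe calls (the method choice here is an example, not from this log):

    rpc.py -s /var/tmp/spdk.sock     rpc_get_methods    # nvmf_tgt inside the namespace
    rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods    # bdevperf on the host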
00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:51.885 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:51.886 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:51.886 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:51.886 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:51.886 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:51.886 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:51.886 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:51.886 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:51.886 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:51.886 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:51.886 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:51.886 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:07:51.886 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:51.886 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:51.886 Cannot find device "nvmf_init_br" 00:07:51.886 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:51.886 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:51.886 Cannot find device "nvmf_init_br2" 00:07:51.886 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:51.886 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:51.886 Cannot find device "nvmf_tgt_br" 00:07:51.886 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:07:51.886 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:51.886 Cannot find device "nvmf_tgt_br2" 00:07:51.886 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:07:51.886 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:51.886 Cannot find device "nvmf_init_br" 00:07:51.886 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:52.146 Cannot find device "nvmf_init_br2" 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:52.146 Cannot find device "nvmf_tgt_br" 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:52.146 Cannot find device "nvmf_tgt_br2" 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:52.146 Cannot find device "nvmf_br" 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:52.146 Cannot find device "nvmf_init_if" 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:52.146 Cannot find device "nvmf_init_if2" 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:52.146 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:52.146 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:52.146 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
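The ipts calls that follow wrap iptables so that every rule the test inserts carries an SPDK_NVMF comment containing its own definition; the iptr cleanup helper (seen at the end of the lvol test above) then runs iptables-save | grep -v SPDK_NVMF | iptables-restore to sweep out exactly those rules and nothing else. The tag-and-sweep pattern in isolation, using a rule from this log:

    # Insert a rule tagged with its own definition:
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # Teardown: restore everything except the tagged rules.
    iptables-save | grep -v SPDK_NVMF | iptables-restore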
00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:52.407 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:52.407 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:07:52.407 00:07:52.407 --- 10.0.0.3 ping statistics --- 00:07:52.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.407 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:52.407 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:52.407 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:07:52.407 00:07:52.407 --- 10.0.0.4 ping statistics --- 00:07:52.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.407 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:52.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:52.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:07:52.407 00:07:52.407 --- 10.0.0.1 ping statistics --- 00:07:52.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.407 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:52.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:52.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:07:52.407 00:07:52.407 --- 10.0.0.2 ping statistics --- 00:07:52.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.407 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=75116 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 75116 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 75116 ']' 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.407 01:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:52.407 [2024-11-19 01:49:02.893004] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
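For reference, the core masks used across this run decode as follows, consistent with the EAL "Total cores available" and "Reactor started on core N" lines:

    -m 0x7  -> cores 0,1,2   nvmf_tgt in the lvol test (three reactors)
    -c 0x18 -> cores 3,4     spdk_nvme_perf ("with lcore 3"/"with lcore 4" above)
    -m 0x1  -> core 0        nvmf_tgt here (single reactor)
    -m 0x2  -> core 1        bdevperf, launched further below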
00:07:52.407 [2024-11-19 01:49:02.893348] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.666 [2024-11-19 01:49:03.048898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.666 [2024-11-19 01:49:03.072287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.666 [2024-11-19 01:49:03.072348] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.666 [2024-11-19 01:49:03.072366] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.666 [2024-11-19 01:49:03.072377] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.666 [2024-11-19 01:49:03.072385] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:52.666 [2024-11-19 01:49:03.072787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.666 [2024-11-19 01:49:03.106972] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.603 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.604 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:53.604 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:53.604 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:53.604 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:53.604 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.604 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:53.604 [2024-11-19 01:49:04.153862] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.604 01:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:53.604 01:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:53.604 01:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.604 01:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:53.604 ************************************ 00:07:53.604 START TEST lvs_grow_clean 00:07:53.604 ************************************ 00:07:53.604 01:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:53.604 01:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:53.604 01:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:53.604 01:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:53.604 01:49:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:53.604 01:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:53.604 01:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:53.604 01:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:53.604 01:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:53.604 01:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:54.171 01:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:54.172 01:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:54.172 01:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=3b181b56-2683-4169-acc2-59eb2a529e3e 00:07:54.172 01:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b181b56-2683-4169-acc2-59eb2a529e3e 00:07:54.172 01:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:54.430 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:54.430 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:54.430 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3b181b56-2683-4169-acc2-59eb2a529e3e lvol 150 00:07:54.999 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ccb8df88-130c-4724-9243-962f2d9fc10e 00:07:54.999 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:54.999 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:54.999 [2024-11-19 01:49:05.524291] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:54.999 [2024-11-19 01:49:05.524375] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:54.999 true 00:07:54.999 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b181b56-2683-4169-acc2-59eb2a529e3e 00:07:54.999 01:49:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:55.258 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:55.258 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:55.517 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ccb8df88-130c-4724-9243-962f2d9fc10e 00:07:55.776 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:56.035 [2024-11-19 01:49:06.460804] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:56.035 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:56.294 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=75204 00:07:56.294 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:56.294 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:56.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:56.294 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 75204 /var/tmp/bdevperf.sock 00:07:56.294 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 75204 ']' 00:07:56.294 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:56.294 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.294 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:56.294 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.294 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:56.294 [2024-11-19 01:49:06.772822] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
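Condensed, the lvs_grow_clean setup that the xtrace above steps through is a short sequence of rpc.py calls (all values copied from the trace; rpc.py targets /var/tmp/spdk.sock by default). With a 4 MiB cluster size the 200M file yields 50 clusters, one of which goes to lvstore metadata — hence the asserted 49 — and the 150M lvol rounds up to the 38 allocated clusters seen later in the bdev dump. Note that bdev_aio_rescan grows only the bdev: total_data_clusters stays 49 until bdev_lvol_grow_lvstore runs further down.

cd /home/vagrant/spdk_repo/spdk
rm -f test/nvmf/target/aio_bdev
truncate -s 200M test/nvmf/target/aio_bdev
scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096   # 51200 4K blocks
lvs=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)                     # 49 data clusters
lvol=$(scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)               # 38 clusters allocated
truncate -s 400M test/nvmf/target/aio_bdev                               # grow the backing file
scripts/rpc.py bdev_aio_rescan aio_bdev                                  # 51200 -> 102400 blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.3 -s 4420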
00:07:56.294 [2024-11-19 01:49:06.773144] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75204 ] 00:07:56.553 [2024-11-19 01:49:06.925513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.553 [2024-11-19 01:49:06.950071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.553 [2024-11-19 01:49:06.984191] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.122 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.122 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:57.122 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:57.380 Nvme0n1 00:07:57.380 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:57.640 [ 00:07:57.640 { 00:07:57.640 "name": "Nvme0n1", 00:07:57.640 "aliases": [ 00:07:57.640 "ccb8df88-130c-4724-9243-962f2d9fc10e" 00:07:57.640 ], 00:07:57.640 "product_name": "NVMe disk", 00:07:57.640 "block_size": 4096, 00:07:57.640 "num_blocks": 38912, 00:07:57.640 "uuid": "ccb8df88-130c-4724-9243-962f2d9fc10e", 00:07:57.640 "numa_id": -1, 00:07:57.640 "assigned_rate_limits": { 00:07:57.640 "rw_ios_per_sec": 0, 00:07:57.640 "rw_mbytes_per_sec": 0, 00:07:57.640 "r_mbytes_per_sec": 0, 00:07:57.640 "w_mbytes_per_sec": 0 00:07:57.640 }, 00:07:57.640 "claimed": false, 00:07:57.640 "zoned": false, 00:07:57.640 "supported_io_types": { 00:07:57.640 "read": true, 00:07:57.640 "write": true, 00:07:57.640 "unmap": true, 00:07:57.640 "flush": true, 00:07:57.640 "reset": true, 00:07:57.640 "nvme_admin": true, 00:07:57.640 "nvme_io": true, 00:07:57.640 "nvme_io_md": false, 00:07:57.640 "write_zeroes": true, 00:07:57.640 "zcopy": false, 00:07:57.640 "get_zone_info": false, 00:07:57.640 "zone_management": false, 00:07:57.640 "zone_append": false, 00:07:57.640 "compare": true, 00:07:57.640 "compare_and_write": true, 00:07:57.640 "abort": true, 00:07:57.640 "seek_hole": false, 00:07:57.640 "seek_data": false, 00:07:57.640 "copy": true, 00:07:57.640 "nvme_iov_md": false 00:07:57.640 }, 00:07:57.640 "memory_domains": [ 00:07:57.640 { 00:07:57.640 "dma_device_id": "system", 00:07:57.640 "dma_device_type": 1 00:07:57.640 } 00:07:57.640 ], 00:07:57.640 "driver_specific": { 00:07:57.640 "nvme": [ 00:07:57.640 { 00:07:57.640 "trid": { 00:07:57.640 "trtype": "TCP", 00:07:57.640 "adrfam": "IPv4", 00:07:57.640 "traddr": "10.0.0.3", 00:07:57.640 "trsvcid": "4420", 00:07:57.640 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:57.640 }, 00:07:57.640 "ctrlr_data": { 00:07:57.640 "cntlid": 1, 00:07:57.640 "vendor_id": "0x8086", 00:07:57.640 "model_number": "SPDK bdev Controller", 00:07:57.640 "serial_number": "SPDK0", 00:07:57.640 "firmware_revision": "25.01", 00:07:57.640 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:57.640 "oacs": { 00:07:57.640 "security": 0, 00:07:57.640 "format": 0, 00:07:57.640 "firmware": 0, 
00:07:57.640 "ns_manage": 0 00:07:57.640 }, 00:07:57.640 "multi_ctrlr": true, 00:07:57.640 "ana_reporting": false 00:07:57.640 }, 00:07:57.640 "vs": { 00:07:57.640 "nvme_version": "1.3" 00:07:57.640 }, 00:07:57.640 "ns_data": { 00:07:57.640 "id": 1, 00:07:57.640 "can_share": true 00:07:57.640 } 00:07:57.640 } 00:07:57.640 ], 00:07:57.640 "mp_policy": "active_passive" 00:07:57.640 } 00:07:57.640 } 00:07:57.640 ] 00:07:57.900 01:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=75226 00:07:57.900 01:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:57.900 01:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:57.900 Running I/O for 10 seconds... 00:07:58.836 Latency(us) 00:07:58.836 [2024-11-19T01:49:09.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.836 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.836 Nvme0n1 : 1.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:07:58.836 [2024-11-19T01:49:09.451Z] =================================================================================================================== 00:07:58.836 [2024-11-19T01:49:09.451Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:07:58.836 00:07:59.771 01:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3b181b56-2683-4169-acc2-59eb2a529e3e 00:07:59.771 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.771 Nvme0n1 : 2.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:07:59.771 [2024-11-19T01:49:10.386Z] =================================================================================================================== 00:07:59.771 [2024-11-19T01:49:10.386Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:07:59.771 00:08:00.030 true 00:08:00.288 01:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:00.288 01:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b181b56-2683-4169-acc2-59eb2a529e3e 00:08:00.546 01:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:00.546 01:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:00.546 01:49:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 75226 00:08:00.808 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.808 Nvme0n1 : 3.00 6561.67 25.63 0.00 0.00 0.00 0.00 0.00 00:08:00.808 [2024-11-19T01:49:11.423Z] =================================================================================================================== 00:08:00.808 [2024-11-19T01:49:11.423Z] Total : 6561.67 25.63 0.00 0.00 0.00 0.00 0.00 00:08:00.808 00:08:02.187 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.187 Nvme0n1 : 4.00 6508.75 25.42 0.00 0.00 0.00 0.00 0.00 00:08:02.187 [2024-11-19T01:49:12.802Z] 
=================================================================================================================== 00:08:02.187 [2024-11-19T01:49:12.803Z] Total : 6508.75 25.42 0.00 0.00 0.00 0.00 0.00 00:08:02.188 00:08:02.756 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.756 Nvme0n1 : 5.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:02.756 [2024-11-19T01:49:13.371Z] =================================================================================================================== 00:08:02.756 [2024-11-19T01:49:13.371Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:08:02.756 00:08:04.135 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.135 Nvme0n1 : 6.00 6455.83 25.22 0.00 0.00 0.00 0.00 0.00 00:08:04.135 [2024-11-19T01:49:14.750Z] =================================================================================================================== 00:08:04.135 [2024-11-19T01:49:14.750Z] Total : 6455.83 25.22 0.00 0.00 0.00 0.00 0.00 00:08:04.135 00:08:05.072 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.072 Nvme0n1 : 7.00 6422.57 25.09 0.00 0.00 0.00 0.00 0.00 00:08:05.072 [2024-11-19T01:49:15.687Z] =================================================================================================================== 00:08:05.072 [2024-11-19T01:49:15.687Z] Total : 6422.57 25.09 0.00 0.00 0.00 0.00 0.00 00:08:05.072 00:08:06.008 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.008 Nvme0n1 : 8.00 6397.62 24.99 0.00 0.00 0.00 0.00 0.00 00:08:06.008 [2024-11-19T01:49:16.623Z] =================================================================================================================== 00:08:06.008 [2024-11-19T01:49:16.623Z] Total : 6397.62 24.99 0.00 0.00 0.00 0.00 0.00 00:08:06.008 00:08:06.945 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.945 Nvme0n1 : 9.00 6378.22 24.91 0.00 0.00 0.00 0.00 0.00 00:08:06.945 [2024-11-19T01:49:17.560Z] =================================================================================================================== 00:08:06.945 [2024-11-19T01:49:17.560Z] Total : 6378.22 24.91 0.00 0.00 0.00 0.00 0.00 00:08:06.945 00:08:07.882 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.882 Nvme0n1 : 10.00 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:08:07.882 [2024-11-19T01:49:18.497Z] =================================================================================================================== 00:08:07.882 [2024-11-19T01:49:18.497Z] Total : 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:08:07.882 00:08:07.882 00:08:07.882 Latency(us) 00:08:07.882 [2024-11-19T01:49:18.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:07.882 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.882 Nvme0n1 : 10.01 6358.83 24.84 0.00 0.00 20124.69 17158.52 43134.60 00:08:07.882 [2024-11-19T01:49:18.497Z] =================================================================================================================== 00:08:07.882 [2024-11-19T01:49:18.497Z] Total : 6358.83 24.84 0.00 0.00 20124.69 17158.52 43134.60 00:08:07.882 { 00:08:07.882 "results": [ 00:08:07.882 { 00:08:07.882 "job": "Nvme0n1", 00:08:07.882 "core_mask": "0x2", 00:08:07.882 "workload": "randwrite", 00:08:07.882 "status": "finished", 00:08:07.882 "queue_depth": 128, 00:08:07.882 "io_size": 4096, 00:08:07.882 "runtime": 
10.006239, 00:08:07.882 "iops": 6358.832724263332, 00:08:07.882 "mibps": 24.83919032915364, 00:08:07.882 "io_failed": 0, 00:08:07.882 "io_timeout": 0, 00:08:07.882 "avg_latency_us": 20124.691072312362, 00:08:07.882 "min_latency_us": 17158.516363636365, 00:08:07.882 "max_latency_us": 43134.60363636364 00:08:07.882 } 00:08:07.882 ], 00:08:07.882 "core_count": 1 00:08:07.882 } 00:08:07.882 01:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 75204 00:08:07.882 01:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 75204 ']' 00:08:07.882 01:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 75204 00:08:07.882 01:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:07.882 01:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.882 01:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75204 00:08:07.882 killing process with pid 75204 00:08:07.882 Received shutdown signal, test time was about 10.000000 seconds 00:08:07.882 00:08:07.882 Latency(us) 00:08:07.882 [2024-11-19T01:49:18.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:07.882 [2024-11-19T01:49:18.497Z] =================================================================================================================== 00:08:07.882 [2024-11-19T01:49:18.497Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:07.883 01:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:07.883 01:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:07.883 01:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75204' 00:08:07.883 01:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 75204 00:08:07.883 01:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 75204 00:08:08.142 01:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:08.400 01:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:08.659 01:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b181b56-2683-4169-acc2-59eb2a529e3e 00:08:08.659 01:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:08.954 01:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:08.954 01:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:08.954 01:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:08.954 [2024-11-19 01:49:19.485065] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:08.954 01:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b181b56-2683-4169-acc2-59eb2a529e3e 00:08:08.954 01:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:08.954 01:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b181b56-2683-4169-acc2-59eb2a529e3e 00:08:08.954 01:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:08.954 01:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.954 01:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:08.954 01:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.954 01:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:08.954 01:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.954 01:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:08.954 01:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:08.954 01:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b181b56-2683-4169-acc2-59eb2a529e3e 00:08:09.231 request: 00:08:09.231 { 00:08:09.231 "uuid": "3b181b56-2683-4169-acc2-59eb2a529e3e", 00:08:09.231 "method": "bdev_lvol_get_lvstores", 00:08:09.231 "req_id": 1 00:08:09.231 } 00:08:09.231 Got JSON-RPC error response 00:08:09.231 response: 00:08:09.231 { 00:08:09.231 "code": -19, 00:08:09.231 "message": "No such device" 00:08:09.231 } 00:08:09.231 01:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:09.231 01:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:09.231 01:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:09.231 01:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:09.231 01:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:09.490 aio_bdev 00:08:09.490 01:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
ccb8df88-130c-4724-9243-962f2d9fc10e 00:08:09.490 01:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=ccb8df88-130c-4724-9243-962f2d9fc10e 00:08:09.490 01:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:09.490 01:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:09.490 01:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:09.490 01:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:09.490 01:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:09.749 01:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ccb8df88-130c-4724-9243-962f2d9fc10e -t 2000 00:08:10.008 [ 00:08:10.008 { 00:08:10.008 "name": "ccb8df88-130c-4724-9243-962f2d9fc10e", 00:08:10.008 "aliases": [ 00:08:10.008 "lvs/lvol" 00:08:10.008 ], 00:08:10.008 "product_name": "Logical Volume", 00:08:10.008 "block_size": 4096, 00:08:10.008 "num_blocks": 38912, 00:08:10.008 "uuid": "ccb8df88-130c-4724-9243-962f2d9fc10e", 00:08:10.008 "assigned_rate_limits": { 00:08:10.008 "rw_ios_per_sec": 0, 00:08:10.008 "rw_mbytes_per_sec": 0, 00:08:10.008 "r_mbytes_per_sec": 0, 00:08:10.008 "w_mbytes_per_sec": 0 00:08:10.008 }, 00:08:10.008 "claimed": false, 00:08:10.008 "zoned": false, 00:08:10.008 "supported_io_types": { 00:08:10.008 "read": true, 00:08:10.008 "write": true, 00:08:10.008 "unmap": true, 00:08:10.008 "flush": false, 00:08:10.008 "reset": true, 00:08:10.008 "nvme_admin": false, 00:08:10.008 "nvme_io": false, 00:08:10.008 "nvme_io_md": false, 00:08:10.008 "write_zeroes": true, 00:08:10.008 "zcopy": false, 00:08:10.008 "get_zone_info": false, 00:08:10.008 "zone_management": false, 00:08:10.008 "zone_append": false, 00:08:10.008 "compare": false, 00:08:10.008 "compare_and_write": false, 00:08:10.008 "abort": false, 00:08:10.008 "seek_hole": true, 00:08:10.008 "seek_data": true, 00:08:10.008 "copy": false, 00:08:10.008 "nvme_iov_md": false 00:08:10.008 }, 00:08:10.008 "driver_specific": { 00:08:10.008 "lvol": { 00:08:10.008 "lvol_store_uuid": "3b181b56-2683-4169-acc2-59eb2a529e3e", 00:08:10.008 "base_bdev": "aio_bdev", 00:08:10.008 "thin_provision": false, 00:08:10.008 "num_allocated_clusters": 38, 00:08:10.008 "snapshot": false, 00:08:10.008 "clone": false, 00:08:10.008 "esnap_clone": false 00:08:10.008 } 00:08:10.008 } 00:08:10.008 } 00:08:10.008 ] 00:08:10.008 01:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:10.008 01:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b181b56-2683-4169-acc2-59eb2a529e3e 00:08:10.008 01:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:10.266 01:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:10.266 01:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r 
'.[0].total_data_clusters' 00:08:10.266 01:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b181b56-2683-4169-acc2-59eb2a529e3e 00:08:10.525 01:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:10.525 01:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ccb8df88-130c-4724-9243-962f2d9fc10e 00:08:11.093 01:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3b181b56-2683-4169-acc2-59eb2a529e3e 00:08:11.351 01:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:11.352 01:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:11.920 ************************************ 00:08:11.920 END TEST lvs_grow_clean 00:08:11.920 ************************************ 00:08:11.920 00:08:11.920 real 0m18.108s 00:08:11.920 user 0m17.290s 00:08:11.920 sys 0m2.328s 00:08:11.920 01:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.920 01:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:11.920 01:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:11.920 01:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:11.920 01:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.920 01:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:11.920 ************************************ 00:08:11.920 START TEST lvs_grow_dirty 00:08:11.920 ************************************ 00:08:11.920 01:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:11.920 01:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:11.920 01:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:11.920 01:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:11.920 01:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:11.920 01:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:11.920 01:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:11.920 01:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:11.920 01:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:11.920 01:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:12.179 01:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:12.179 01:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:12.437 01:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5ab2366f-0548-4dc8-b576-a7478c125e39 00:08:12.437 01:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ab2366f-0548-4dc8-b576-a7478c125e39 00:08:12.437 01:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:12.696 01:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:12.696 01:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:12.696 01:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5ab2366f-0548-4dc8-b576-a7478c125e39 lvol 150 00:08:12.954 01:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=08028dc2-7c33-4ae3-96af-159651ac686c 00:08:12.954 01:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:12.955 01:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:12.955 [2024-11-19 01:49:23.567286] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:12.955 [2024-11-19 01:49:23.567392] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:12.955 true 00:08:13.213 01:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:13.213 01:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ab2366f-0548-4dc8-b576-a7478c125e39 00:08:13.213 01:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:13.213 01:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:13.781 01:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 08028dc2-7c33-4ae3-96af-159651ac686c 00:08:13.781 01:49:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:14.039 [2024-11-19 01:49:24.535766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:14.039 01:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:14.298 01:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=75468 00:08:14.298 01:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:14.298 01:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:14.298 01:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 75468 /var/tmp/bdevperf.sock 00:08:14.298 01:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 75468 ']' 00:08:14.298 01:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:14.298 01:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:14.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:14.298 01:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:14.298 01:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:14.298 01:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:14.298 [2024-11-19 01:49:24.827170] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
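The measurement half mirrors the clean run: bdevperf is itself an SPDK application with its own RPC socket, so the initiator side is also driven through rpc.py. It logs in to the subsystem over TCP as bdev Nvme0n1, the workload is started in the background, and the lvstore (the dirty run's 5ab2366f-…, created the same way as the clean one above) is grown while writes are in flight. A sketch with the flags from the trace:

# bdevperf was launched as:
#   build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 \
#       -w randwrite -t 10 -S 1 -z
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
run_test_pid=$!
sleep 2                                                  # let randwrite ramp up
scripts/rpc.py bdev_lvol_grow_lvstore -u 5ab2366f-0548-4dc8-b576-a7478c125e39
# 400M / 4M = 100 clusters, one for metadata -> total_data_clusters == 99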
00:08:14.298 [2024-11-19 01:49:24.827282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75468 ] 00:08:14.557 [2024-11-19 01:49:24.980808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.557 [2024-11-19 01:49:25.005604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.557 [2024-11-19 01:49:25.041331] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.123 01:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.123 01:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:15.123 01:49:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:15.691 Nvme0n1 00:08:15.691 01:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:15.691 [ 00:08:15.691 { 00:08:15.691 "name": "Nvme0n1", 00:08:15.691 "aliases": [ 00:08:15.691 "08028dc2-7c33-4ae3-96af-159651ac686c" 00:08:15.691 ], 00:08:15.691 "product_name": "NVMe disk", 00:08:15.691 "block_size": 4096, 00:08:15.691 "num_blocks": 38912, 00:08:15.691 "uuid": "08028dc2-7c33-4ae3-96af-159651ac686c", 00:08:15.691 "numa_id": -1, 00:08:15.691 "assigned_rate_limits": { 00:08:15.691 "rw_ios_per_sec": 0, 00:08:15.691 "rw_mbytes_per_sec": 0, 00:08:15.691 "r_mbytes_per_sec": 0, 00:08:15.691 "w_mbytes_per_sec": 0 00:08:15.691 }, 00:08:15.691 "claimed": false, 00:08:15.691 "zoned": false, 00:08:15.691 "supported_io_types": { 00:08:15.691 "read": true, 00:08:15.691 "write": true, 00:08:15.691 "unmap": true, 00:08:15.691 "flush": true, 00:08:15.691 "reset": true, 00:08:15.691 "nvme_admin": true, 00:08:15.691 "nvme_io": true, 00:08:15.691 "nvme_io_md": false, 00:08:15.691 "write_zeroes": true, 00:08:15.691 "zcopy": false, 00:08:15.691 "get_zone_info": false, 00:08:15.691 "zone_management": false, 00:08:15.691 "zone_append": false, 00:08:15.691 "compare": true, 00:08:15.691 "compare_and_write": true, 00:08:15.691 "abort": true, 00:08:15.691 "seek_hole": false, 00:08:15.691 "seek_data": false, 00:08:15.691 "copy": true, 00:08:15.691 "nvme_iov_md": false 00:08:15.691 }, 00:08:15.691 "memory_domains": [ 00:08:15.691 { 00:08:15.691 "dma_device_id": "system", 00:08:15.691 "dma_device_type": 1 00:08:15.691 } 00:08:15.691 ], 00:08:15.691 "driver_specific": { 00:08:15.691 "nvme": [ 00:08:15.691 { 00:08:15.691 "trid": { 00:08:15.691 "trtype": "TCP", 00:08:15.691 "adrfam": "IPv4", 00:08:15.691 "traddr": "10.0.0.3", 00:08:15.691 "trsvcid": "4420", 00:08:15.691 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:15.691 }, 00:08:15.691 "ctrlr_data": { 00:08:15.691 "cntlid": 1, 00:08:15.691 "vendor_id": "0x8086", 00:08:15.691 "model_number": "SPDK bdev Controller", 00:08:15.691 "serial_number": "SPDK0", 00:08:15.691 "firmware_revision": "25.01", 00:08:15.691 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:15.691 "oacs": { 00:08:15.691 "security": 0, 00:08:15.691 "format": 0, 00:08:15.691 "firmware": 0, 
00:08:15.691 "ns_manage": 0 00:08:15.691 }, 00:08:15.691 "multi_ctrlr": true, 00:08:15.691 "ana_reporting": false 00:08:15.691 }, 00:08:15.691 "vs": { 00:08:15.691 "nvme_version": "1.3" 00:08:15.691 }, 00:08:15.691 "ns_data": { 00:08:15.691 "id": 1, 00:08:15.691 "can_share": true 00:08:15.691 } 00:08:15.691 } 00:08:15.691 ], 00:08:15.691 "mp_policy": "active_passive" 00:08:15.691 } 00:08:15.691 } 00:08:15.691 ] 00:08:15.691 01:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=75496 00:08:15.691 01:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:15.691 01:49:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:15.950 Running I/O for 10 seconds... 00:08:16.884 Latency(us) 00:08:16.884 [2024-11-19T01:49:27.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.884 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.884 Nvme0n1 : 1.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:16.884 [2024-11-19T01:49:27.499Z] =================================================================================================================== 00:08:16.884 [2024-11-19T01:49:27.499Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:16.884 00:08:17.819 01:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5ab2366f-0548-4dc8-b576-a7478c125e39 00:08:17.819 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.819 Nvme0n1 : 2.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:17.819 [2024-11-19T01:49:28.434Z] =================================================================================================================== 00:08:17.819 [2024-11-19T01:49:28.434Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:08:17.819 00:08:18.078 true 00:08:18.078 01:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:18.078 01:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ab2366f-0548-4dc8-b576-a7478c125e39 00:08:18.338 01:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:18.338 01:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:18.338 01:49:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 75496 00:08:18.905 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.905 Nvme0n1 : 3.00 6561.67 25.63 0.00 0.00 0.00 0.00 0.00 00:08:18.905 [2024-11-19T01:49:29.520Z] =================================================================================================================== 00:08:18.905 [2024-11-19T01:49:29.520Z] Total : 6561.67 25.63 0.00 0.00 0.00 0.00 0.00 00:08:18.905 00:08:19.840 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.840 Nvme0n1 : 4.00 6482.00 25.32 0.00 0.00 0.00 0.00 0.00 00:08:19.840 [2024-11-19T01:49:30.455Z] 
=================================================================================================================== 00:08:19.840 [2024-11-19T01:49:30.455Z] Total : 6482.00 25.32 0.00 0.00 0.00 0.00 0.00 00:08:19.840 00:08:21.217 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.217 Nvme0n1 : 5.00 6404.80 25.02 0.00 0.00 0.00 0.00 0.00 00:08:21.217 [2024-11-19T01:49:31.832Z] =================================================================================================================== 00:08:21.217 [2024-11-19T01:49:31.832Z] Total : 6404.80 25.02 0.00 0.00 0.00 0.00 0.00 00:08:21.217 00:08:21.784 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.784 Nvme0n1 : 6.00 6374.50 24.90 0.00 0.00 0.00 0.00 0.00 00:08:21.784 [2024-11-19T01:49:32.399Z] =================================================================================================================== 00:08:21.784 [2024-11-19T01:49:32.399Z] Total : 6374.50 24.90 0.00 0.00 0.00 0.00 0.00 00:08:21.784 00:08:23.163 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.163 Nvme0n1 : 7.00 6326.14 24.71 0.00 0.00 0.00 0.00 0.00 00:08:23.163 [2024-11-19T01:49:33.778Z] =================================================================================================================== 00:08:23.163 [2024-11-19T01:49:33.778Z] Total : 6326.14 24.71 0.00 0.00 0.00 0.00 0.00 00:08:23.163 00:08:24.100 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.100 Nvme0n1 : 8.00 6297.38 24.60 0.00 0.00 0.00 0.00 0.00 00:08:24.100 [2024-11-19T01:49:34.715Z] =================================================================================================================== 00:08:24.100 [2024-11-19T01:49:34.715Z] Total : 6297.38 24.60 0.00 0.00 0.00 0.00 0.00 00:08:24.100 00:08:25.038 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.038 Nvme0n1 : 9.00 6303.22 24.62 0.00 0.00 0.00 0.00 0.00 00:08:25.038 [2024-11-19T01:49:35.653Z] =================================================================================================================== 00:08:25.038 [2024-11-19T01:49:35.653Z] Total : 6303.22 24.62 0.00 0.00 0.00 0.00 0.00 00:08:25.038 00:08:25.976 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.976 Nvme0n1 : 10.00 6295.20 24.59 0.00 0.00 0.00 0.00 0.00 00:08:25.976 [2024-11-19T01:49:36.591Z] =================================================================================================================== 00:08:25.976 [2024-11-19T01:49:36.591Z] Total : 6295.20 24.59 0.00 0.00 0.00 0.00 0.00 00:08:25.976 00:08:25.976 00:08:25.976 Latency(us) 00:08:25.976 [2024-11-19T01:49:36.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.976 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.976 Nvme0n1 : 10.00 6305.74 24.63 0.00 0.00 20293.74 11439.01 75306.82 00:08:25.976 [2024-11-19T01:49:36.592Z] =================================================================================================================== 00:08:25.977 [2024-11-19T01:49:36.592Z] Total : 6305.74 24.63 0.00 0.00 20293.74 11439.01 75306.82 00:08:25.977 { 00:08:25.977 "results": [ 00:08:25.977 { 00:08:25.977 "job": "Nvme0n1", 00:08:25.977 "core_mask": "0x2", 00:08:25.977 "workload": "randwrite", 00:08:25.977 "status": "finished", 00:08:25.977 "queue_depth": 128, 00:08:25.977 "io_size": 4096, 00:08:25.977 "runtime": 
10.003579, 00:08:25.977 "iops": 6305.74317451784, 00:08:25.977 "mibps": 24.631809275460313, 00:08:25.977 "io_failed": 0, 00:08:25.977 "io_timeout": 0, 00:08:25.977 "avg_latency_us": 20293.7412062028, 00:08:25.977 "min_latency_us": 11439.01090909091, 00:08:25.977 "max_latency_us": 75306.82181818182 00:08:25.977 } 00:08:25.977 ], 00:08:25.977 "core_count": 1 00:08:25.977 } 00:08:25.977 01:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 75468 00:08:25.977 01:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 75468 ']' 00:08:25.977 01:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 75468 00:08:25.977 01:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:25.977 01:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:25.977 01:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75468 00:08:25.977 01:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:25.977 01:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:25.977 killing process with pid 75468 00:08:25.977 01:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75468' 00:08:25.977 Received shutdown signal, test time was about 10.000000 seconds 00:08:25.977 00:08:25.977 Latency(us) 00:08:25.977 [2024-11-19T01:49:36.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.977 [2024-11-19T01:49:36.592Z] =================================================================================================================== 00:08:25.977 [2024-11-19T01:49:36.592Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:25.977 01:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 75468 00:08:25.977 01:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 75468 00:08:26.236 01:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:26.495 01:49:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:26.754 01:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ab2366f-0548-4dc8-b576-a7478c125e39 00:08:26.754 01:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:27.012 01:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:27.012 01:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:27.012 01:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 75116 00:08:27.012 
01:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 75116 00:08:27.012 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 75116 Killed "${NVMF_APP[@]}" "$@" 00:08:27.012 01:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:27.012 01:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:27.012 01:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:27.012 01:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:27.012 01:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:27.012 01:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=75633 00:08:27.012 01:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:27.012 01:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 75633 00:08:27.012 01:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 75633 ']' 00:08:27.012 01:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.012 01:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.012 01:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.012 01:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.012 01:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:27.012 [2024-11-19 01:49:37.472060] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:08:27.012 [2024-11-19 01:49:37.472764] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.012 [2024-11-19 01:49:37.607996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.012 [2024-11-19 01:49:37.626952] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.012 [2024-11-19 01:49:37.627042] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.012 [2024-11-19 01:49:37.627069] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.012 [2024-11-19 01:49:37.627077] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:27.012 [2024-11-19 01:49:37.627083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
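Here is the point of the dirty variant: the first target (pid 75116) is killed with SIGKILL above, so the lvstore on aio_bdev never sees a clean shutdown, and a fresh nvmf_tgt (pid 75633) takes its place. Re-creating the AIO bdev under the new target is what triggers the "Performing recovery on blobstore" messages just below: blobstore detects the unclean state and replays its metadata. The assertions that follow, spelled out by hand (UUID from the trace; the arithmetic uses the 38 clusters reported as allocated to the lvol):

scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
# -> blobstore recovery runs here on the unclean lvstore
scripts/rpc.py bdev_lvol_get_lvstores -u 5ab2366f-0548-4dc8-b576-a7478c125e39 \
    | jq -r '.[0].free_clusters'        # expect 61 = 99 total - 38 allocated
scripts/rpc.py bdev_lvol_get_lvstores -u 5ab2366f-0548-4dc8-b576-a7478c125e39 \
    | jq -r '.[0].total_data_clusters'  # expect 99: the pre-crash grow survived

In other words, the grow performed before the crash is durable: recovery reconstructs both the enlarged cluster count and the lvol's allocation map.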
00:08:27.012 [2024-11-19 01:49:37.627388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.270 [2024-11-19 01:49:37.658590] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.270 01:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.270 01:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:27.270 01:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:27.270 01:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:27.270 01:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:27.270 01:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:27.270 01:49:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:27.528 [2024-11-19 01:49:38.026322] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:27.528 [2024-11-19 01:49:38.026699] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:27.528 [2024-11-19 01:49:38.026949] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:27.528 01:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:27.528 01:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 08028dc2-7c33-4ae3-96af-159651ac686c 00:08:27.528 01:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=08028dc2-7c33-4ae3-96af-159651ac686c 00:08:27.528 01:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:27.528 01:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:27.528 01:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:27.528 01:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:27.528 01:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:27.787 01:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 08028dc2-7c33-4ae3-96af-159651ac686c -t 2000 00:08:28.046 [ 00:08:28.046 { 00:08:28.046 "name": "08028dc2-7c33-4ae3-96af-159651ac686c", 00:08:28.046 "aliases": [ 00:08:28.046 "lvs/lvol" 00:08:28.046 ], 00:08:28.046 "product_name": "Logical Volume", 00:08:28.046 "block_size": 4096, 00:08:28.046 "num_blocks": 38912, 00:08:28.046 "uuid": "08028dc2-7c33-4ae3-96af-159651ac686c", 00:08:28.046 "assigned_rate_limits": { 00:08:28.046 "rw_ios_per_sec": 0, 00:08:28.047 "rw_mbytes_per_sec": 0, 00:08:28.047 "r_mbytes_per_sec": 0, 00:08:28.047 "w_mbytes_per_sec": 0 00:08:28.047 }, 00:08:28.047 
"claimed": false, 00:08:28.047 "zoned": false, 00:08:28.047 "supported_io_types": { 00:08:28.047 "read": true, 00:08:28.047 "write": true, 00:08:28.047 "unmap": true, 00:08:28.047 "flush": false, 00:08:28.047 "reset": true, 00:08:28.047 "nvme_admin": false, 00:08:28.047 "nvme_io": false, 00:08:28.047 "nvme_io_md": false, 00:08:28.047 "write_zeroes": true, 00:08:28.047 "zcopy": false, 00:08:28.047 "get_zone_info": false, 00:08:28.047 "zone_management": false, 00:08:28.047 "zone_append": false, 00:08:28.047 "compare": false, 00:08:28.047 "compare_and_write": false, 00:08:28.047 "abort": false, 00:08:28.047 "seek_hole": true, 00:08:28.047 "seek_data": true, 00:08:28.047 "copy": false, 00:08:28.047 "nvme_iov_md": false 00:08:28.047 }, 00:08:28.047 "driver_specific": { 00:08:28.047 "lvol": { 00:08:28.047 "lvol_store_uuid": "5ab2366f-0548-4dc8-b576-a7478c125e39", 00:08:28.047 "base_bdev": "aio_bdev", 00:08:28.047 "thin_provision": false, 00:08:28.047 "num_allocated_clusters": 38, 00:08:28.047 "snapshot": false, 00:08:28.047 "clone": false, 00:08:28.047 "esnap_clone": false 00:08:28.047 } 00:08:28.047 } 00:08:28.047 } 00:08:28.047 ] 00:08:28.047 01:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:28.047 01:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ab2366f-0548-4dc8-b576-a7478c125e39 00:08:28.047 01:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:28.305 01:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:28.305 01:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ab2366f-0548-4dc8-b576-a7478c125e39 00:08:28.306 01:49:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:28.564 01:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:28.564 01:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:28.822 [2024-11-19 01:49:39.344425] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:28.822 01:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ab2366f-0548-4dc8-b576-a7478c125e39 00:08:28.822 01:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:28.822 01:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ab2366f-0548-4dc8-b576-a7478c125e39 00:08:28.822 01:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:28.822 01:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.822 01:49:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:28.822 01:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.822 01:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:28.822 01:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.822 01:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:28.822 01:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:28.822 01:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ab2366f-0548-4dc8-b576-a7478c125e39 00:08:29.080 request: 00:08:29.080 { 00:08:29.080 "uuid": "5ab2366f-0548-4dc8-b576-a7478c125e39", 00:08:29.080 "method": "bdev_lvol_get_lvstores", 00:08:29.081 "req_id": 1 00:08:29.081 } 00:08:29.081 Got JSON-RPC error response 00:08:29.081 response: 00:08:29.081 { 00:08:29.081 "code": -19, 00:08:29.081 "message": "No such device" 00:08:29.081 } 00:08:29.339 01:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:29.339 01:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:29.339 01:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:29.339 01:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:29.339 01:49:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:29.598 aio_bdev 00:08:29.598 01:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 08028dc2-7c33-4ae3-96af-159651ac686c 00:08:29.598 01:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=08028dc2-7c33-4ae3-96af-159651ac686c 00:08:29.598 01:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:29.598 01:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:29.598 01:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:29.598 01:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:29.598 01:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:29.857 01:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 08028dc2-7c33-4ae3-96af-159651ac686c -t 2000 00:08:30.116 [ 00:08:30.116 { 
00:08:30.116 "name": "08028dc2-7c33-4ae3-96af-159651ac686c", 00:08:30.116 "aliases": [ 00:08:30.116 "lvs/lvol" 00:08:30.116 ], 00:08:30.116 "product_name": "Logical Volume", 00:08:30.116 "block_size": 4096, 00:08:30.116 "num_blocks": 38912, 00:08:30.116 "uuid": "08028dc2-7c33-4ae3-96af-159651ac686c", 00:08:30.116 "assigned_rate_limits": { 00:08:30.116 "rw_ios_per_sec": 0, 00:08:30.116 "rw_mbytes_per_sec": 0, 00:08:30.116 "r_mbytes_per_sec": 0, 00:08:30.116 "w_mbytes_per_sec": 0 00:08:30.116 }, 00:08:30.116 "claimed": false, 00:08:30.116 "zoned": false, 00:08:30.116 "supported_io_types": { 00:08:30.116 "read": true, 00:08:30.116 "write": true, 00:08:30.116 "unmap": true, 00:08:30.116 "flush": false, 00:08:30.116 "reset": true, 00:08:30.116 "nvme_admin": false, 00:08:30.116 "nvme_io": false, 00:08:30.116 "nvme_io_md": false, 00:08:30.116 "write_zeroes": true, 00:08:30.116 "zcopy": false, 00:08:30.116 "get_zone_info": false, 00:08:30.116 "zone_management": false, 00:08:30.116 "zone_append": false, 00:08:30.116 "compare": false, 00:08:30.116 "compare_and_write": false, 00:08:30.116 "abort": false, 00:08:30.116 "seek_hole": true, 00:08:30.116 "seek_data": true, 00:08:30.116 "copy": false, 00:08:30.116 "nvme_iov_md": false 00:08:30.116 }, 00:08:30.116 "driver_specific": { 00:08:30.116 "lvol": { 00:08:30.116 "lvol_store_uuid": "5ab2366f-0548-4dc8-b576-a7478c125e39", 00:08:30.116 "base_bdev": "aio_bdev", 00:08:30.116 "thin_provision": false, 00:08:30.116 "num_allocated_clusters": 38, 00:08:30.116 "snapshot": false, 00:08:30.116 "clone": false, 00:08:30.116 "esnap_clone": false 00:08:30.116 } 00:08:30.116 } 00:08:30.116 } 00:08:30.116 ] 00:08:30.116 01:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:30.117 01:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:30.117 01:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ab2366f-0548-4dc8-b576-a7478c125e39 00:08:30.376 01:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:30.376 01:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5ab2366f-0548-4dc8-b576-a7478c125e39 00:08:30.376 01:49:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:30.692 01:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:30.692 01:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 08028dc2-7c33-4ae3-96af-159651ac686c 00:08:30.974 01:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5ab2366f-0548-4dc8-b576-a7478c125e39 00:08:30.974 01:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:31.232 01:49:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:31.800 00:08:31.800 real 0m19.864s 00:08:31.800 user 0m40.628s 00:08:31.800 sys 0m9.587s 00:08:31.800 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.800 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:31.800 ************************************ 00:08:31.800 END TEST lvs_grow_dirty 00:08:31.800 ************************************ 00:08:31.800 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:31.800 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:31.800 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:31.800 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:31.800 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:31.800 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:31.800 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:31.800 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:31.800 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:31.800 nvmf_trace.0 00:08:31.800 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:31.800 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:31.800 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:31.800 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:32.059 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:32.059 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:32.059 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:32.059 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:32.059 rmmod nvme_tcp 00:08:32.059 rmmod nvme_fabrics 00:08:32.059 rmmod nvme_keyring 00:08:32.059 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:32.059 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:32.059 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:32.059 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 75633 ']' 00:08:32.059 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 75633 00:08:32.059 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 75633 ']' 00:08:32.059 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 75633 00:08:32.059 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:32.059 01:49:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:32.059 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75633 00:08:32.318 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:32.318 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:32.318 killing process with pid 75633 00:08:32.318 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75633' 00:08:32.318 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 75633 00:08:32.318 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 75633 00:08:32.318 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:32.318 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:32.318 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:32.318 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:32.318 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:32.318 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:32.318 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:32.318 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:32.318 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:32.318 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:32.318 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:32.318 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:32.318 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:32.318 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:32.318 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:32.318 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:32.318 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:32.318 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:32.577 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:32.577 01:49:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:32.577 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:32.577 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:32.577 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:08:32.577 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.577 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:32.577 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.577 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:08:32.577 00:08:32.577 real 0m40.885s 00:08:32.577 user 1m4.048s 00:08:32.577 sys 0m12.757s 00:08:32.577 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.577 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:32.577 ************************************ 00:08:32.577 END TEST nvmf_lvs_grow 00:08:32.577 ************************************ 00:08:32.577 01:49:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:32.577 01:49:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:32.577 01:49:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.577 01:49:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:32.577 ************************************ 00:08:32.577 START TEST nvmf_bdev_io_wait 00:08:32.577 ************************************ 00:08:32.577 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:32.838 * Looking for test storage... 
00:08:32.838 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:32.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.838 --rc genhtml_branch_coverage=1 00:08:32.838 --rc genhtml_function_coverage=1 00:08:32.838 --rc genhtml_legend=1 00:08:32.838 --rc geninfo_all_blocks=1 00:08:32.838 --rc geninfo_unexecuted_blocks=1 00:08:32.838 00:08:32.838 ' 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:32.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.838 --rc genhtml_branch_coverage=1 00:08:32.838 --rc genhtml_function_coverage=1 00:08:32.838 --rc genhtml_legend=1 00:08:32.838 --rc geninfo_all_blocks=1 00:08:32.838 --rc geninfo_unexecuted_blocks=1 00:08:32.838 00:08:32.838 ' 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:32.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.838 --rc genhtml_branch_coverage=1 00:08:32.838 --rc genhtml_function_coverage=1 00:08:32.838 --rc genhtml_legend=1 00:08:32.838 --rc geninfo_all_blocks=1 00:08:32.838 --rc geninfo_unexecuted_blocks=1 00:08:32.838 00:08:32.838 ' 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:32.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.838 --rc genhtml_branch_coverage=1 00:08:32.838 --rc genhtml_function_coverage=1 00:08:32.838 --rc genhtml_legend=1 00:08:32.838 --rc geninfo_all_blocks=1 00:08:32.838 --rc geninfo_unexecuted_blocks=1 00:08:32.838 00:08:32.838 ' 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:32.838 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:32.839 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:32.839 
01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:32.839 Cannot find device "nvmf_init_br" 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:32.839 Cannot find device "nvmf_init_br2" 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:32.839 Cannot find device "nvmf_tgt_br" 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:32.839 Cannot find device "nvmf_tgt_br2" 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:32.839 Cannot find device "nvmf_init_br" 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:08:32.839 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:33.098 Cannot find device "nvmf_init_br2" 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:33.098 Cannot find device "nvmf_tgt_br" 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:33.098 Cannot find device "nvmf_tgt_br2" 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:33.098 Cannot find device "nvmf_br" 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:33.098 Cannot find device "nvmf_init_if" 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:33.098 Cannot find device "nvmf_init_if2" 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:33.098 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:08:33.098 
01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:33.098 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:33.098 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:33.357 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:33.357 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:33.357 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:33.357 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:33.357 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:33.357 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:33.357 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:33.357 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:33.357 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:33.357 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:08:33.357 00:08:33.357 --- 10.0.0.3 ping statistics --- 00:08:33.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.357 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:08:33.357 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:33.357 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:33.357 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:08:33.357 00:08:33.357 --- 10.0.0.4 ping statistics --- 00:08:33.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.357 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:08:33.357 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:33.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:33.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:33.357 00:08:33.357 --- 10.0.0.1 ping statistics --- 00:08:33.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.358 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:33.358 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:33.358 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:33.358 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:08:33.358 00:08:33.358 --- 10.0.0.2 ping statistics --- 00:08:33.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.358 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:08:33.358 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:33.358 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:08:33.358 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:33.358 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:33.358 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:33.358 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:33.358 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:33.358 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:33.358 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:33.358 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:33.358 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:33.358 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:33.358 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:33.358 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=75991 00:08:33.358 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:33.358 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 75991 00:08:33.358 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 75991 ']' 00:08:33.358 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.358 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.358 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.358 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.358 01:49:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:33.358 [2024-11-19 01:49:43.857084] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:08:33.358 [2024-11-19 01:49:43.857184] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.617 [2024-11-19 01:49:44.008977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:33.617 [2024-11-19 01:49:44.036340] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:33.617 [2024-11-19 01:49:44.036410] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:33.617 [2024-11-19 01:49:44.036423] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:33.617 [2024-11-19 01:49:44.036433] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:33.617 [2024-11-19 01:49:44.036442] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:33.617 [2024-11-19 01:49:44.037349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.617 [2024-11-19 01:49:44.037435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.617 [2024-11-19 01:49:44.037714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:33.617 [2024-11-19 01:49:44.037723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.617 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:33.617 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:33.617 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:33.617 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:33.617 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:33.617 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.617 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:33.617 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.617 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:33.617 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.617 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:33.617 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.617 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:33.617 [2024-11-19 01:49:44.203958] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:33.617 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.617 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:33.617 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.617 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:33.617 [2024-11-19 01:49:44.215085] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.617 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.617 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:33.617 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.617 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:33.877 Malloc0 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:33.877 [2024-11-19 01:49:44.268364] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=76013 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=76015 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:33.877 01:49:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:33.877 { 00:08:33.877 "params": { 00:08:33.877 "name": "Nvme$subsystem", 00:08:33.877 "trtype": "$TEST_TRANSPORT", 00:08:33.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:33.877 "adrfam": "ipv4", 00:08:33.877 "trsvcid": "$NVMF_PORT", 00:08:33.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:33.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:33.877 "hdgst": ${hdgst:-false}, 00:08:33.877 "ddgst": ${ddgst:-false} 00:08:33.877 }, 00:08:33.877 "method": "bdev_nvme_attach_controller" 00:08:33.877 } 00:08:33.877 EOF 00:08:33.877 )") 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=76017 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:33.877 { 00:08:33.877 "params": { 00:08:33.877 "name": "Nvme$subsystem", 00:08:33.877 "trtype": "$TEST_TRANSPORT", 00:08:33.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:33.877 "adrfam": "ipv4", 00:08:33.877 "trsvcid": "$NVMF_PORT", 00:08:33.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:33.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:33.877 "hdgst": ${hdgst:-false}, 00:08:33.877 "ddgst": ${ddgst:-false} 00:08:33.877 }, 00:08:33.877 "method": "bdev_nvme_attach_controller" 00:08:33.877 } 00:08:33.877 EOF 00:08:33.877 )") 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=76020 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:08:33.877 { 00:08:33.877 "params": { 00:08:33.877 "name": "Nvme$subsystem", 00:08:33.877 "trtype": "$TEST_TRANSPORT", 00:08:33.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:33.877 "adrfam": "ipv4", 00:08:33.877 "trsvcid": "$NVMF_PORT", 00:08:33.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:33.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:33.877 "hdgst": ${hdgst:-false}, 00:08:33.877 "ddgst": ${ddgst:-false} 00:08:33.877 }, 00:08:33.877 "method": "bdev_nvme_attach_controller" 00:08:33.877 } 00:08:33.877 EOF 00:08:33.877 )") 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:33.877 "params": { 00:08:33.877 "name": "Nvme1", 00:08:33.877 "trtype": "tcp", 00:08:33.877 "traddr": "10.0.0.3", 00:08:33.877 "adrfam": "ipv4", 00:08:33.877 "trsvcid": "4420", 00:08:33.877 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:33.877 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:33.877 "hdgst": false, 00:08:33.877 "ddgst": false 00:08:33.877 }, 00:08:33.877 "method": "bdev_nvme_attach_controller" 00:08:33.877 }' 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:33.877 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:33.877 { 00:08:33.877 "params": { 00:08:33.877 "name": "Nvme$subsystem", 00:08:33.878 "trtype": "$TEST_TRANSPORT", 00:08:33.878 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:33.878 "adrfam": "ipv4", 00:08:33.878 "trsvcid": "$NVMF_PORT", 00:08:33.878 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:33.878 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:33.878 "hdgst": ${hdgst:-false}, 00:08:33.878 "ddgst": ${ddgst:-false} 00:08:33.878 }, 00:08:33.878 "method": "bdev_nvme_attach_controller" 00:08:33.878 } 00:08:33.878 EOF 00:08:33.878 )") 00:08:33.878 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:33.878 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:33.878 "params": { 00:08:33.878 "name": "Nvme1", 00:08:33.878 "trtype": "tcp", 00:08:33.878 "traddr": "10.0.0.3", 00:08:33.878 "adrfam": "ipv4", 00:08:33.878 "trsvcid": "4420", 00:08:33.878 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:33.878 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:33.878 "hdgst": false, 00:08:33.878 "ddgst": false 00:08:33.878 }, 00:08:33.878 "method": "bdev_nvme_attach_controller" 00:08:33.878 }' 00:08:33.878 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:33.878 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
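The config+=()/heredoc/jq records traced above are the body of the gen_nvmf_target_json helper from nvmf/common.sh: it emits one bdev_nvme_attach_controller "params" block per requested subsystem, which each bdevperf instance then consumes as its --json config. A minimal sketch of the pattern as it appears in the trace, reduced to the single default subsystem (TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT come from the test environment and expand to tcp, 10.0.0.3 and 4420 in the printf output shown here; the real helper wraps this in additional plumbing):

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do    # default to subsystem 1, as in the trace
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the per-subsystem blocks and pretty-print them; with the single
    # subsystem shown here the result is valid JSON as-is (the trace runs
    # jq and the IFS=,/printf join as separate steps at @584-@586).
    local IFS=,
    printf '%s\n' "${config[*]}" | jq .
}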
00:08:33.878 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:33.878 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:33.878 "params": { 00:08:33.878 "name": "Nvme1", 00:08:33.878 "trtype": "tcp", 00:08:33.878 "traddr": "10.0.0.3", 00:08:33.878 "adrfam": "ipv4", 00:08:33.878 "trsvcid": "4420", 00:08:33.878 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:33.878 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:33.878 "hdgst": false, 00:08:33.878 "ddgst": false 00:08:33.878 }, 00:08:33.878 "method": "bdev_nvme_attach_controller" 00:08:33.878 }' 00:08:33.878 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:33.878 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:33.878 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:33.878 "params": { 00:08:33.878 "name": "Nvme1", 00:08:33.878 "trtype": "tcp", 00:08:33.878 "traddr": "10.0.0.3", 00:08:33.878 "adrfam": "ipv4", 00:08:33.878 "trsvcid": "4420", 00:08:33.878 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:33.878 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:33.878 "hdgst": false, 00:08:33.878 "ddgst": false 00:08:33.878 }, 00:08:33.878 "method": "bdev_nvme_attach_controller" 00:08:33.878 }' 00:08:33.878 [2024-11-19 01:49:44.332731] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:08:33.878 [2024-11-19 01:49:44.332815] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:33.878 01:49:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 76013 00:08:33.878 [2024-11-19 01:49:44.351406] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:08:33.878 [2024-11-19 01:49:44.351483] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:33.878 [2024-11-19 01:49:44.356470] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:08:33.878 [2024-11-19 01:49:44.356563] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:33.878 [2024-11-19 01:49:44.365713] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:08:33.878 [2024-11-19 01:49:44.365813] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:34.136 [2024-11-19 01:49:44.521176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.136 [2024-11-19 01:49:44.537618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:34.136 [2024-11-19 01:49:44.551539] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.136 [2024-11-19 01:49:44.558549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.136 [2024-11-19 01:49:44.574910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:34.136 [2024-11-19 01:49:44.588823] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.136 [2024-11-19 01:49:44.605286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.136 [2024-11-19 01:49:44.620991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:34.136 [2024-11-19 01:49:44.634869] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.136 Running I/O for 1 seconds... 00:08:34.136 [2024-11-19 01:49:44.651821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.136 [2024-11-19 01:49:44.667943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:34.136 [2024-11-19 01:49:44.681728] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.136 Running I/O for 1 seconds... 00:08:34.136 Running I/O for 1 seconds... 00:08:34.394 Running I/O for 1 seconds... 
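All four data paths are now in flight at once: bdev_io_wait.sh@27-@34 above started one bdevperf instance per workload, each pinned to its own core and fed the generated target config through process substitution (the --json /dev/fd/63 argument in the trace is the file descriptor that <(...) produces). A condensed sketch of the launch pattern, with flags as logged (-m core mask, -i shm id, -q queue depth, -o I/O size in bytes, -w workload, -t run time in seconds, -s hugepage memory in MB):

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
$bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
$bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 &
READ_PID=$!
$bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
FLUSH_PID=$!
$bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
UNMAP_PID=$!
# The script waits on each pid in turn (@37-@40 in the trace), which is why
# the four result tables below arrive together once the 1-second runs finish.
wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID

The distinct shm ids (-i 1..4) correspond to the --file-prefix=spdk1..spdk4 EAL arguments in the dumps above, which keeps the four DPDK processes' hugepage files from colliding, and -s 256 matches the "-m 256" each EAL instance reports.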
00:08:35.328 166264.00 IOPS, 649.47 MiB/s
00:08:35.328 Latency(us)
00:08:35.328 [2024-11-19T01:49:45.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:35.328 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:08:35.328 Nvme1n1 : 1.00 165923.06 648.14 0.00 0.00 767.38 383.53 2025.66
00:08:35.328 [2024-11-19T01:49:45.943Z] ===================================================================================================================
00:08:35.328 [2024-11-19T01:49:45.943Z] Total : 165923.06 648.14 0.00 0.00 767.38 383.53 2025.66
00:08:35.328 9537.00 IOPS, 37.25 MiB/s
00:08:35.328 Latency(us)
00:08:35.328 [2024-11-19T01:49:45.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:35.328 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:08:35.328 Nvme1n1 : 1.01 9575.52 37.40 0.00 0.00 13301.53 8400.52 20256.58
00:08:35.328 [2024-11-19T01:49:45.943Z] ===================================================================================================================
00:08:35.328 [2024-11-19T01:49:45.943Z] Total : 9575.52 37.40 0.00 0.00 13301.53 8400.52 20256.58
00:08:35.328 7897.00 IOPS, 30.85 MiB/s
00:08:35.328 Latency(us)
00:08:35.328 [2024-11-19T01:49:45.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:35.328 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:08:35.328 Nvme1n1 : 1.01 7960.32 31.10 0.00 0.00 15997.97 6494.02 25976.09
00:08:35.328 [2024-11-19T01:49:45.943Z] ===================================================================================================================
00:08:35.328 [2024-11-19T01:49:45.943Z] Total : 7960.32 31.10 0.00 0.00 15997.97 6494.02 25976.09
00:08:35.328 8185.00 IOPS, 31.97 MiB/s
00:08:35.328 Latency(us)
00:08:35.328 [2024-11-19T01:49:45.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:35.328 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:08:35.328 Nvme1n1 : 1.01 8261.83 32.27 0.00 0.00 15424.98 6851.49 23831.27
00:08:35.328 [2024-11-19T01:49:45.943Z] ===================================================================================================================
00:08:35.328 [2024-11-19T01:49:45.944Z] Total : 8261.83 32.27 0.00 0.00 15424.98 6851.49 23831.27
00:08:35.329 01:49:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 76015
01:49:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 76017
01:49:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 76020
01:49:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
01:49:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
01:49:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
01:49:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:49:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
01:49:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
01:49:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:08:35.329 01:49:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:35.587 01:49:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:35.587 01:49:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:35.587 01:49:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:35.587 01:49:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:35.587 rmmod nvme_tcp 00:08:35.587 rmmod nvme_fabrics 00:08:35.587 rmmod nvme_keyring 00:08:35.587 01:49:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:35.587 01:49:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:35.587 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:35.587 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 75991 ']' 00:08:35.587 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 75991 00:08:35.587 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 75991 ']' 00:08:35.587 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 75991 00:08:35.587 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:35.587 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.587 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75991 00:08:35.587 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:35.587 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:35.587 killing process with pid 75991 00:08:35.587 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75991' 00:08:35.587 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 75991 00:08:35.588 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 75991 00:08:35.588 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:35.588 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:35.588 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:35.588 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:35.588 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:35.588 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:35.588 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:35.588 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:35.588 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:35.588 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:35.588 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:35.588 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:35.588 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:35.847 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:35.847 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:35.847 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:35.847 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:35.847 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:35.847 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:35.847 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:35.847 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:35.847 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:35.847 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:35.847 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.847 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.847 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.847 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:08:35.847 00:08:35.847 real 0m3.225s 00:08:35.847 user 0m12.479s 00:08:35.847 sys 0m2.058s 00:08:35.847 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.847 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.847 ************************************ 00:08:35.847 END TEST nvmf_bdev_io_wait 00:08:35.847 ************************************ 00:08:35.847 01:49:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:35.847 01:49:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:35.847 01:49:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.847 01:49:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:35.847 ************************************ 00:08:35.847 START TEST nvmf_queue_depth 00:08:35.847 ************************************ 00:08:35.847 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:36.107 * Looking for test storage... 
00:08:36.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:36.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.107 --rc genhtml_branch_coverage=1 00:08:36.107 --rc genhtml_function_coverage=1 00:08:36.107 --rc genhtml_legend=1 00:08:36.107 --rc geninfo_all_blocks=1 00:08:36.107 --rc geninfo_unexecuted_blocks=1 00:08:36.107 00:08:36.107 ' 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:36.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.107 --rc genhtml_branch_coverage=1 00:08:36.107 --rc genhtml_function_coverage=1 00:08:36.107 --rc genhtml_legend=1 00:08:36.107 --rc geninfo_all_blocks=1 00:08:36.107 --rc geninfo_unexecuted_blocks=1 00:08:36.107 00:08:36.107 ' 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:36.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.107 --rc genhtml_branch_coverage=1 00:08:36.107 --rc genhtml_function_coverage=1 00:08:36.107 --rc genhtml_legend=1 00:08:36.107 --rc geninfo_all_blocks=1 00:08:36.107 --rc geninfo_unexecuted_blocks=1 00:08:36.107 00:08:36.107 ' 00:08:36.107 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:36.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.107 --rc genhtml_branch_coverage=1 00:08:36.107 --rc genhtml_function_coverage=1 00:08:36.107 --rc genhtml_legend=1 00:08:36.107 --rc geninfo_all_blocks=1 00:08:36.107 --rc geninfo_unexecuted_blocks=1 00:08:36.107 00:08:36.107 ' 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:36.108 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:36.108 
01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:36.108 01:49:46 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:36.108 Cannot find device "nvmf_init_br" 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:36.108 Cannot find device "nvmf_init_br2" 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:36.108 Cannot find device "nvmf_tgt_br" 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:36.108 Cannot find device "nvmf_tgt_br2" 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:36.108 Cannot find device "nvmf_init_br" 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:36.108 Cannot find device "nvmf_init_br2" 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:08:36.108 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:36.368 Cannot find device "nvmf_tgt_br" 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:36.368 Cannot find device "nvmf_tgt_br2" 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:36.368 Cannot find device "nvmf_br" 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:36.368 Cannot find device "nvmf_init_if" 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:36.368 Cannot find device "nvmf_init_if2" 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:36.368 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:36.368 01:49:46 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:36.368 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:36.368 
01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:36.368 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:36.627 01:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:36.627 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:36.627 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:08:36.627 00:08:36.627 --- 10.0.0.3 ping statistics --- 00:08:36.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.627 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:36.627 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:36.627 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:08:36.627 00:08:36.627 --- 10.0.0.4 ping statistics --- 00:08:36.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.627 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:36.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:36.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:08:36.627 00:08:36.627 --- 10.0.0.1 ping statistics --- 00:08:36.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.627 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:36.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:36.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:08:36.627 00:08:36.627 --- 10.0.0.2 ping statistics --- 00:08:36.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.627 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=76280 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 76280 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 76280 ']' 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.627 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:36.627 [2024-11-19 01:49:47.109410] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
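The ip/iptables sequence traced above (nvmf_veth_init, nvmf/common.sh@177-@219) is what lets an initiator on the host reach a target running in its own network namespace: veth pairs cross the namespace boundary, a bridge stitches their host-side peer ends together, and the four pings confirm that 10.0.0.1/10.0.0.2 (host side) and 10.0.0.3/10.0.0.4 (namespace side) can all see each other. Condensed here to a single initiator/target pair as a sketch (the helper builds the second pair, nvmf_init_if2/nvmf_tgt_if2, the same way, and tags its iptables rules with SPDK_NVMF comments via the ipts wrapper):

# target end of the veth pair lives in the namespace, initiator end on the host
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
# bridge the host-side peers so the two ends can talk
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# allow NVMe/TCP traffic in and across the bridge, as in the trace
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3    # host -> namespace

With the topology up, nvmf_tgt itself is launched under ip netns exec nvmf_tgt_ns_spdk (the nvmfappstart record above), so the 10.0.0.3:4420 listener created later lives entirely inside the namespace while bdevperf connects from the host side.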
00:08:36.627 [2024-11-19 01:49:47.109551] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.886 [2024-11-19 01:49:47.256510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.886 [2024-11-19 01:49:47.275488] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.886 [2024-11-19 01:49:47.275581] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:36.887 [2024-11-19 01:49:47.275608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:36.887 [2024-11-19 01:49:47.275616] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:36.887 [2024-11-19 01:49:47.275622] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:36.887 [2024-11-19 01:49:47.275953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.887 [2024-11-19 01:49:47.307289] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:36.887 [2024-11-19 01:49:47.425092] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:36.887 Malloc0 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:36.887 [2024-11-19 01:49:47.467706] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=76299 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 76299 /var/tmp/bdevperf.sock 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 76299 ']' 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:36.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.887 01:49:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:37.146 [2024-11-19 01:49:47.530048] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
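The target-side bring-up above (queue_depth.sh@23-@27) is the same four-RPC recipe as the previous test: nvmf_create_transport -t tcp -o -u 8192, a 64 MiB / 512-byte-block Malloc0 bdev, subsystem cnode1, and a listener on 10.0.0.3:4420. The initiator side differs: bdevperf is started idle with -z plus its own RPC socket, the controller is attached over that socket, and the run is then kicked off explicitly, which is what the records below show. Condensed from the trace (waitforlisten and rpc_cmd are the autotest_common.sh helpers seen above):

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
# -z: start with no workload and wait for RPCs; queue depth 1024 is the point of the test
$bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock
# attach the TCP controller through bdevperf's own RPC server; the bdev appears as NVMe0n1
rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# trigger the 10-second verify run at queue depth 1024
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests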
00:08:37.146 [2024-11-19 01:49:47.530148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76299 ]
00:08:37.146 [2024-11-19 01:49:47.682107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:37.146 [2024-11-19 01:49:47.706545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:37.146 [2024-11-19 01:49:47.741166] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:08:38.081 01:49:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:49:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
01:49:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
01:49:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
01:49:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:38.081 NVMe0n1
00:08:38.081 01:49:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
01:49:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:08:38.340 Running I/O for 10 seconds...
00:08:40.210 7168.00 IOPS, 28.00 MiB/s
[2024-11-19T01:49:51.763Z] 7877.50 IOPS, 30.77 MiB/s
[2024-11-19T01:49:53.141Z] 7946.33 IOPS, 31.04 MiB/s
[2024-11-19T01:49:54.079Z] 7978.00 IOPS, 31.16 MiB/s
[2024-11-19T01:49:55.017Z] 7911.20 IOPS, 30.90 MiB/s
[2024-11-19T01:49:55.953Z] 7872.00 IOPS, 30.75 MiB/s
[2024-11-19T01:49:56.890Z] 7842.00 IOPS, 30.63 MiB/s
[2024-11-19T01:49:57.853Z] 7822.00 IOPS, 30.55 MiB/s
[2024-11-19T01:49:58.790Z] 7837.00 IOPS, 30.61 MiB/s
[2024-11-19T01:49:59.049Z] 7843.90 IOPS, 30.64 MiB/s
00:08:48.434 Latency(us)
00:08:48.434 [2024-11-19T01:49:59.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:48.434 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:08:48.434 Verification LBA range: start 0x0 length 0x4000
00:08:48.434 NVMe0n1 : 10.10 7868.62 30.74 0.00 0.00 129387.81 27644.28 94371.84
00:08:48.434 [2024-11-19T01:49:59.049Z] ===================================================================================================================
00:08:48.434 [2024-11-19T01:49:59.049Z] Total : 7868.62 30.74 0.00 0.00 129387.81 27644.28 94371.84
00:08:48.434 {
00:08:48.434 "results": [
00:08:48.434 {
00:08:48.434 "job": "NVMe0n1",
00:08:48.434 "core_mask": "0x1",
00:08:48.434 "workload": "verify",
00:08:48.434 "status": "finished",
00:08:48.434 "verify_range": {
00:08:48.434 "start": 0,
00:08:48.434 "length": 16384
00:08:48.434 },
00:08:48.434 "queue_depth": 1024,
00:08:48.434 "io_size": 4096,
00:08:48.434 "runtime": 10.098723,
00:08:48.434 "iops": 7868.618636237473,
00:08:48.434 "mibps": 30.73679154780263,
00:08:48.434 "io_failed": 0,
00:08:48.434 "io_timeout": 0,
00:08:48.434 "avg_latency_us": 129387.80728110168,
00:08:48.434 "min_latency_us": 27644.276363636363,
00:08:48.434 "max_latency_us": 94371.84
00:08:48.434 }
00:08:48.434 ],
00:08:48.434 "core_count": 1
00:08:48.434 }
00:08:48.434 01:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 76299
01:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 76299 ']'
01:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 76299
01:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
01:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76299
01:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0
01:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:48.434 killing process with pid 76299
01:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76299'
Received shutdown signal, test time was about 10.000000 seconds
00:08:48.434 
00:08:48.434 Latency(us)
00:08:48.434 [2024-11-19T01:49:59.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:48.434 [2024-11-19T01:49:59.049Z] ===================================================================================================================
00:08:48.434 [2024-11-19T01:49:59.049Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:08:48.434 01:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 76299
01:49:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 76299
01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup
01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync
00:08:48.695 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e
01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20}
01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:48.695 rmmod nvme_tcp
00:08:48.695 rmmod nvme_fabrics
00:08:48.695 rmmod nvme_keyring
00:08:48.695 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e
01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0
01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 76280 ']'
01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 76280
01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 76280 ']'
00:08:48.695 
01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 76280 00:08:48.695 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:48.695 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:48.695 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76280 00:08:48.695 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:48.695 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:48.695 killing process with pid 76280 00:08:48.695 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76280' 00:08:48.695 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 76280 00:08:48.695 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 76280 00:08:48.955 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:48.955 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:48.955 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:48.955 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:48.955 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:48.955 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:48.955 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:48.955 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:48.955 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:48.955 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:48.955 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:48.955 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:48.955 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:48.955 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:48.955 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:48.955 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:48.955 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:48.955 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:48.955 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:48.955 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:48.955 01:49:59 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:48.955 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:48.955 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:48.955 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.955 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.955 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.214 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:08:49.214 00:08:49.214 real 0m13.132s 00:08:49.214 user 0m22.851s 00:08:49.214 sys 0m2.143s 00:08:49.214 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.214 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:49.214 ************************************ 00:08:49.214 END TEST nvmf_queue_depth 00:08:49.214 ************************************ 00:08:49.214 01:49:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:49.214 01:49:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:49.214 01:49:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.214 01:49:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:49.214 ************************************ 00:08:49.214 START TEST nvmf_target_multipath 00:08:49.215 ************************************ 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:49.215 * Looking for test storage... 
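Before the multipath test begins probing for storage, note what nvmftestfini just did above: the nvme-tcp/nvme-fabrics/nvme-keyring modules were unloaded, every iptables rule tagged SPDK_NVMF was filtered back out, and the veth/bridge/namespace plumbing was dismantled. As a sketch of those same steps (the interface names are the ones common.sh uses throughout this log; the final namespace deletion is an assumption, since _remove_spdk_ns runs with its output discarded here):

  iptables-save | grep -v SPDK_NVMF | iptables-restore      # keep only non-SPDK rules
  for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" nomaster; ip link set "$l" down      # detach from bridge, then down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if; ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk                          # assumed inside _remove_spdk_ns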
00:08:49.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:49.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.215 --rc genhtml_branch_coverage=1 00:08:49.215 --rc genhtml_function_coverage=1 00:08:49.215 --rc genhtml_legend=1 00:08:49.215 --rc geninfo_all_blocks=1 00:08:49.215 --rc geninfo_unexecuted_blocks=1 00:08:49.215 00:08:49.215 ' 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:49.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.215 --rc genhtml_branch_coverage=1 00:08:49.215 --rc genhtml_function_coverage=1 00:08:49.215 --rc genhtml_legend=1 00:08:49.215 --rc geninfo_all_blocks=1 00:08:49.215 --rc geninfo_unexecuted_blocks=1 00:08:49.215 00:08:49.215 ' 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:49.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.215 --rc genhtml_branch_coverage=1 00:08:49.215 --rc genhtml_function_coverage=1 00:08:49.215 --rc genhtml_legend=1 00:08:49.215 --rc geninfo_all_blocks=1 00:08:49.215 --rc geninfo_unexecuted_blocks=1 00:08:49.215 00:08:49.215 ' 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:49.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.215 --rc genhtml_branch_coverage=1 00:08:49.215 --rc genhtml_function_coverage=1 00:08:49.215 --rc genhtml_legend=1 00:08:49.215 --rc geninfo_all_blocks=1 00:08:49.215 --rc geninfo_unexecuted_blocks=1 00:08:49.215 00:08:49.215 ' 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.215 
01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.215 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:49.216 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:49.216 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:49.216 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:49.216 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:49.475 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:49.475 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:49.475 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:49.475 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:49.475 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:49.475 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:49.475 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.475 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:49.475 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:49.475 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:49.475 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.475 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.475 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.475 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:49.475 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:49.475 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:49.475 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:49.475 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:49.475 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:49.475 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:49.475 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:49.476 01:49:59 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:49.476 Cannot find device "nvmf_init_br" 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:49.476 Cannot find device "nvmf_init_br2" 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:49.476 Cannot find device "nvmf_tgt_br" 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:49.476 Cannot find device "nvmf_tgt_br2" 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:49.476 Cannot find device "nvmf_init_br" 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:49.476 Cannot find device "nvmf_init_br2" 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:49.476 Cannot find device "nvmf_tgt_br" 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:49.476 Cannot find device "nvmf_tgt_br2" 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:49.476 Cannot find device "nvmf_br" 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:49.476 Cannot find device "nvmf_init_if" 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:49.476 Cannot find device "nvmf_init_if2" 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:49.476 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:49.476 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:49.476 01:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:49.476 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:49.476 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:49.476 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:49.476 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:49.476 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:49.476 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:49.476 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:49.476 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:49.476 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:49.736 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:49.736 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:49.736 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:49.736 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
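What nvmf_veth_init is building here is the mirror image of that teardown: one network namespace, four veth pairs, and the 10.0.0.0/24 addressing used throughout these tests (.1/.2 on the initiator ends, .3/.4 on the target ends inside the namespace). The core pattern for each target-side path, sketched from the commands traced above:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br   # one veth pair per path
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk            # move target end into the ns
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up                                # host end gets enslaved to nvmf_br next

The bridge creation, master assignments, and SPDK_NVMF-tagged iptables ACCEPT rules that follow stitch the four host-side ends into a single L2 segment, and the four pings verify it before any NVMe traffic flows.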
00:08:49.736 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:49.736 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:49.736 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:49.736 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:49.736 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:49.736 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:49.736 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:49.736 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:49.736 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:49.736 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:49.736 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:49.736 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:49.736 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:49.736 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:49.736 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:49.736 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:49.736 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:08:49.736 00:08:49.736 --- 10.0.0.3 ping statistics --- 00:08:49.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.736 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:08:49.736 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:49.736 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:49.736 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:08:49.736 00:08:49.736 --- 10.0.0.4 ping statistics --- 00:08:49.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.736 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:08:49.737 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:49.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:49.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:49.737 00:08:49.737 --- 10.0.0.1 ping statistics --- 00:08:49.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.737 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:49.737 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:49.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:49.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:08:49.737 00:08:49.737 --- 10.0.0.2 ping statistics --- 00:08:49.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.737 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:08:49.737 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.737 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:08:49.737 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:49.737 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.737 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:49.737 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:49.737 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.737 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:49.737 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:49.737 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:08:49.737 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:49.737 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:49.737 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:49.737 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:49.737 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:49.737 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=76677 00:08:49.737 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 76677 00:08:49.737 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:49.737 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 76677 ']' 00:08:49.737 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.737 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
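With connectivity confirmed by the pings, the target application itself is started inside the namespace; NVMF_APP is simply prefixed with the NVMF_TARGET_NS_CMD array, so the reactors only ever see the namespaced interfaces. Roughly (the RPC polling loop is an approximation of waitforlisten, using the real rpc_get_methods method; the actual helper checks the socket directly):

  modprobe nvme-tcp
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # block until the app answers on its RPC socket before issuing any RPCs
  until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

The -m 0xF mask explains the four "Reactor started" notices below: the multipath target runs on cores 0-3, unlike the single-core bdevperf instance in the queue-depth test.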
00:08:49.737 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.737 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.737 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:49.737 [2024-11-19 01:50:00.315081] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:08:49.737 [2024-11-19 01:50:00.315175] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.997 [2024-11-19 01:50:00.470579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:49.997 [2024-11-19 01:50:00.495914] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.997 [2024-11-19 01:50:00.495977] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:49.997 [2024-11-19 01:50:00.495991] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:49.997 [2024-11-19 01:50:00.496002] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:49.997 [2024-11-19 01:50:00.496010] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:49.997 [2024-11-19 01:50:00.496844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.997 [2024-11-19 01:50:00.496986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.997 [2024-11-19 01:50:00.497127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:49.997 [2024-11-19 01:50:00.497134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.997 [2024-11-19 01:50:00.532110] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:49.997 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.997 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:08:49.997 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:49.997 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:49.997 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:50.256 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.256 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:50.515 [2024-11-19 01:50:00.903970] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.515 01:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:50.774 Malloc0 00:08:50.774 01:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:51.032 01:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:51.291 01:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:51.549 [2024-11-19 01:50:02.088776] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:51.549 01:50:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:08:51.807 [2024-11-19 01:50:02.361103] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:08:51.807 01:50:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid=7cdc77f7-6c10-48d3-83fa-703a290bdf89 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:52.064 01:50:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid=7cdc77f7-6c10-48d3-83fa-703a290bdf89 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:08:52.064 01:50:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:52.064 01:50:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:08:52.064 01:50:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:52.064 01:50:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:52.064 01:50:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=76764 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:54.593 01:50:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:54.593 [global] 00:08:54.593 thread=1 00:08:54.593 invalidate=1 00:08:54.593 rw=randrw 00:08:54.593 time_based=1 00:08:54.593 runtime=6 00:08:54.593 ioengine=libaio 00:08:54.593 direct=1 00:08:54.593 bs=4096 00:08:54.593 iodepth=128 00:08:54.593 norandommap=0 00:08:54.593 numjobs=1 00:08:54.593 00:08:54.593 verify_dump=1 00:08:54.593 verify_backlog=512 00:08:54.593 verify_state_save=0 00:08:54.593 do_verify=1 00:08:54.593 verify=crc32c-intel 00:08:54.593 [job0] 00:08:54.593 filename=/dev/nvme0n1 00:08:54.593 Could not set queue depth (nvme0n1) 00:08:54.593 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:54.593 fio-3.35 00:08:54.593 Starting 1 thread 00:08:55.160 01:50:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:55.418 01:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:55.676 01:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:55.676 01:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:55.676 01:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:55.676 01:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:55.677 01:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:55.677 01:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:55.677 01:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:55.677 01:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:55.677 01:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:55.677 01:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:55.677 01:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:55.677 01:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:55.677 01:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:56.245 01:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:56.245 01:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:56.506 01:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:56.506 01:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:56.506 01:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:56.506 01:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:56.506 01:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:56.506 01:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:56.506 01:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:56.506 01:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:56.506 01:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:56.506 01:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:56.506 01:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:56.506 01:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 76764 00:09:00.699 00:09:00.699 job0: (groupid=0, jobs=1): err= 0: pid=76785: Tue Nov 19 01:50:11 2024 00:09:00.699 read: IOPS=10.5k, BW=40.9MiB/s (42.9MB/s)(246MiB/6007msec) 00:09:00.699 slat (usec): min=3, max=6409, avg=56.65, stdev=227.32 00:09:00.699 clat (usec): min=1609, max=15183, avg=8351.94, stdev=1472.17 00:09:00.699 lat (usec): min=1618, max=15205, avg=8408.59, stdev=1476.05 00:09:00.699 clat percentiles (usec): 00:09:00.699 | 1.00th=[ 4359], 5.00th=[ 6259], 10.00th=[ 7046], 20.00th=[ 7504], 00:09:00.699 | 30.00th=[ 7767], 40.00th=[ 8029], 50.00th=[ 8225], 60.00th=[ 8455], 00:09:00.699 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9634], 95.00th=[11600], 00:09:00.699 | 99.00th=[13173], 99.50th=[13435], 99.90th=[14222], 99.95th=[14484], 00:09:00.699 | 99.99th=[14877] 00:09:00.699 bw ( KiB/s): min= 8008, max=26832, per=50.86%, avg=21300.91, stdev=6090.87, samples=11 00:09:00.699 iops : min= 2002, max= 6708, avg=5325.18, stdev=1522.69, samples=11 00:09:00.699 write: IOPS=6090, BW=23.8MiB/s (24.9MB/s)(127MiB/5327msec); 0 zone resets 00:09:00.699 slat (usec): min=4, max=1756, avg=65.43, stdev=159.04 00:09:00.699 clat (usec): min=2243, max=14383, avg=7295.76, stdev=1323.52 00:09:00.699 lat (usec): min=2267, max=14463, avg=7361.18, stdev=1328.25 00:09:00.699 clat percentiles (usec): 00:09:00.699 | 1.00th=[ 3294], 5.00th=[ 4293], 10.00th=[ 5669], 20.00th=[ 6718], 00:09:00.699 | 30.00th=[ 7046], 40.00th=[ 7242], 50.00th=[ 7504], 60.00th=[ 7701], 00:09:00.699 | 70.00th=[ 7898], 80.00th=[ 8160], 90.00th=[ 8455], 95.00th=[ 8848], 00:09:00.699 | 99.00th=[11076], 99.50th=[11994], 99.90th=[13042], 99.95th=[13566], 00:09:00.699 | 99.99th=[14222] 00:09:00.699 bw ( KiB/s): min= 8280, max=26288, per=87.63%, avg=21350.18, stdev=5788.23, samples=11 00:09:00.699 iops : min= 2070, max= 6572, avg=5337.45, stdev=1447.00, samples=11 00:09:00.699 lat (msec) : 2=0.01%, 4=1.56%, 10=92.39%, 20=6.04% 00:09:00.699 cpu : usr=5.71%, sys=20.85%, ctx=5484, majf=0, minf=127 00:09:00.699 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:00.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:00.699 issued rwts: total=62889,32445,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:00.699 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:00.699 00:09:00.699 Run status group 0 (all jobs): 00:09:00.699 READ: bw=40.9MiB/s (42.9MB/s), 40.9MiB/s-40.9MiB/s (42.9MB/s-42.9MB/s), io=246MiB (258MB), run=6007-6007msec 00:09:00.699 WRITE: bw=23.8MiB/s (24.9MB/s), 23.8MiB/s-23.8MiB/s (24.9MB/s-24.9MB/s), io=127MiB (133MB), run=5327-5327msec 00:09:00.699 00:09:00.699 Disk stats (read/write): 00:09:00.699 nvme0n1: ios=61995/31815, merge=0/0, ticks=496681/218257, in_queue=714938, util=98.71% 00:09:00.699 01:50:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:00.699 01:50:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:09:01.266 01:50:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:01.266 01:50:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:01.266 01:50:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:01.266 01:50:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:01.266 01:50:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:01.266 01:50:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:01.266 01:50:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:01.266 01:50:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:01.266 01:50:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:01.266 01:50:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:01.266 01:50:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:01.266 01:50:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:01.266 01:50:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:01.266 01:50:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=76866 00:09:01.266 01:50:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:01.266 01:50:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:01.266 [global] 00:09:01.266 thread=1 00:09:01.266 invalidate=1 00:09:01.266 rw=randrw 00:09:01.266 time_based=1 00:09:01.266 runtime=6 00:09:01.266 ioengine=libaio 00:09:01.266 direct=1 00:09:01.266 bs=4096 00:09:01.266 iodepth=128 00:09:01.266 norandommap=0 00:09:01.266 numjobs=1 00:09:01.266 00:09:01.266 verify_dump=1 00:09:01.266 verify_backlog=512 00:09:01.266 verify_state_save=0 00:09:01.266 do_verify=1 00:09:01.266 verify=crc32c-intel 00:09:01.266 [job0] 00:09:01.266 filename=/dev/nvme0n1 00:09:01.266 Could not set queue depth (nvme0n1) 00:09:01.266 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:01.266 fio-3.35 00:09:01.266 Starting 1 thread 00:09:02.205 01:50:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:02.464 01:50:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:02.794 
01:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:02.794 01:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:02.794 01:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:02.794 01:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:02.794 01:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:02.794 01:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:02.794 01:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:02.794 01:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:02.794 01:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:02.794 01:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:02.794 01:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:02.794 01:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:02.794 01:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:03.064 01:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:03.324 01:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:03.324 01:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:03.324 01:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:03.324 01:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:03.324 01:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:03.324 01:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:03.324 01:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:03.324 01:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:03.324 01:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:03.324 01:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:03.324 01:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:03.324 01:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:03.324 01:50:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 76866 00:09:07.517 00:09:07.517 job0: (groupid=0, jobs=1): err= 0: pid=76887: Tue Nov 19 01:50:17 2024 00:09:07.517 read: IOPS=11.4k, BW=44.6MiB/s (46.8MB/s)(268MiB/6006msec) 00:09:07.517 slat (usec): min=2, max=15074, avg=42.74, stdev=196.91 00:09:07.517 clat (usec): min=284, max=22925, avg=7638.22, stdev=2065.93 00:09:07.517 lat (usec): min=291, max=22975, avg=7680.97, stdev=2080.57 00:09:07.517 clat percentiles (usec): 00:09:07.517 | 1.00th=[ 2507], 5.00th=[ 4228], 10.00th=[ 4817], 20.00th=[ 5997], 00:09:07.517 | 30.00th=[ 6980], 40.00th=[ 7570], 50.00th=[ 7898], 60.00th=[ 8094], 00:09:07.517 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9503], 95.00th=[11469], 00:09:07.517 | 99.00th=[13304], 99.50th=[14091], 99.90th=[17433], 99.95th=[17695], 00:09:07.517 | 99.99th=[18482] 00:09:07.517 bw ( KiB/s): min= 7128, max=41832, per=53.23%, avg=24306.91, stdev=10272.77, samples=11 00:09:07.517 iops : min= 1782, max=10458, avg=6076.73, stdev=2568.19, samples=11 00:09:07.517 write: IOPS=6839, BW=26.7MiB/s (28.0MB/s)(143MiB/5341msec); 0 zone resets 00:09:07.517 slat (usec): min=3, max=6727, avg=54.23, stdev=142.40 00:09:07.517 clat (usec): min=298, max=15770, avg=6542.42, stdev=1907.16 00:09:07.517 lat (usec): min=317, max=15790, avg=6596.65, stdev=1920.24 00:09:07.517 clat percentiles (usec): 00:09:07.517 | 1.00th=[ 2376], 5.00th=[ 3261], 10.00th=[ 3785], 20.00th=[ 4555], 00:09:07.517 | 30.00th=[ 5538], 40.00th=[ 6718], 50.00th=[ 7111], 60.00th=[ 7373], 00:09:07.517 | 70.00th=[ 7635], 80.00th=[ 7898], 90.00th=[ 8356], 95.00th=[ 8848], 00:09:07.517 | 99.00th=[11207], 99.50th=[12125], 99.90th=[14353], 99.95th=[14877], 00:09:07.517 | 99.99th=[15664] 00:09:07.517 bw ( KiB/s): min= 7448, max=41064, per=88.96%, avg=24338.18, stdev=10132.35, samples=11 00:09:07.517 iops : min= 1862, max=10266, avg=6084.55, stdev=2533.09, samples=11 00:09:07.517 lat (usec) : 500=0.02%, 750=0.07%, 1000=0.13% 00:09:07.517 lat (msec) : 2=0.48%, 4=6.33%, 10=87.01%, 20=5.95%, 50=0.01% 00:09:07.517 cpu : usr=6.11%, sys=22.48%, ctx=5910, majf=0, minf=127 00:09:07.517 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:07.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:07.517 issued rwts: total=68561,36529,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.517 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:09:07.517 00:09:07.517 Run status group 0 (all jobs): 00:09:07.517 READ: bw=44.6MiB/s (46.8MB/s), 44.6MiB/s-44.6MiB/s (46.8MB/s-46.8MB/s), io=268MiB (281MB), run=6006-6006msec 00:09:07.517 WRITE: bw=26.7MiB/s (28.0MB/s), 26.7MiB/s-26.7MiB/s (28.0MB/s-28.0MB/s), io=143MiB (150MB), run=5341-5341msec 00:09:07.517 00:09:07.517 Disk stats (read/write): 00:09:07.517 nvme0n1: ios=67651/35906, merge=0/0, ticks=493353/219060, in_queue=712413, util=98.65% 00:09:07.517 01:50:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:07.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:07.517 01:50:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:07.517 01:50:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:09:07.517 01:50:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:07.517 01:50:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:07.517 01:50:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:07.517 01:50:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:07.517 01:50:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:09:07.517 01:50:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:07.776 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:07.776 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:07.776 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:07.776 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:07.776 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:07.776 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:07.776 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:07.776 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:07.776 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:07.776 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:07.776 rmmod nvme_tcp 00:09:07.776 rmmod nvme_fabrics 00:09:07.776 rmmod nvme_keyring 00:09:08.036 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:08.036 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:08.036 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:08.036 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # 
'[' -n 76677 ']' 00:09:08.036 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 76677 00:09:08.036 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 76677 ']' 00:09:08.036 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 76677 00:09:08.036 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:09:08.036 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:08.036 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76677 00:09:08.036 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:08.036 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:08.036 killing process with pid 76677 00:09:08.036 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76677' 00:09:08.036 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 76677 00:09:08.036 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 76677 00:09:08.036 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:08.036 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:08.036 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:08.036 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:08.036 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:08.036 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:08.036 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:08.036 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:08.036 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:08.036 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:08.036 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:08.036 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:08.036 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:08.036 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:08.296 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:08.296 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:08.296 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:08.296 
01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:08.296 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:08.296 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:08.296 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:08.296 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:08.296 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:08.296 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.296 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.296 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.296 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:09:08.296 00:09:08.296 real 0m19.200s 00:09:08.296 user 1m10.929s 00:09:08.296 sys 0m10.132s 00:09:08.296 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.296 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:08.297 ************************************ 00:09:08.297 END TEST nvmf_target_multipath 00:09:08.297 ************************************ 00:09:08.297 01:50:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:08.297 01:50:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:08.297 01:50:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.297 01:50:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:08.297 ************************************ 00:09:08.297 START TEST nvmf_zcopy 00:09:08.297 ************************************ 00:09:08.297 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:08.557 * Looking for test storage... 
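The multipath assertions in the test that just finished all go through one small sysfs helper. The trace shows check_ana_state's setup (path, the expected ana_state, timeout=20, ana_state_f) and its two [[ ]] guards, but not the retry body itself, so the sleep-and-decrement loop below is an assumed reconstruction rather than the verbatim multipath.sh source:

    check_ana_state() {
        local path=$1 ana_state=$2
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        # Poll until the sysfs node exists and reports the expected ANA state;
        # the one-second cadence and the failure return are assumptions.
        while [[ ! -e $ana_state_f ]] || [[ $(<"$ana_state_f") != "$ana_state" ]]; do
            (( timeout-- > 0 )) || return 1
            sleep 1
        done
    }

This is what let the test flip a listener with nvmf_subsystem_listener_set_ana_state and then block until the kernel's multipath view of /sys/block/nvme0c0n1 (controller 0) and nvme0c1n1 (controller 1) caught up, while fio kept I/O running across the failover.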
00:09:08.557 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:08.557 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:08.557 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:08.557 01:50:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:08.557 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:08.557 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:08.557 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:08.557 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:08.557 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.557 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:08.557 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:08.557 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:08.557 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:08.557 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:08.557 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:08.557 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:08.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.558 --rc genhtml_branch_coverage=1 00:09:08.558 --rc genhtml_function_coverage=1 00:09:08.558 --rc genhtml_legend=1 00:09:08.558 --rc geninfo_all_blocks=1 00:09:08.558 --rc geninfo_unexecuted_blocks=1 00:09:08.558 00:09:08.558 ' 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:08.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.558 --rc genhtml_branch_coverage=1 00:09:08.558 --rc genhtml_function_coverage=1 00:09:08.558 --rc genhtml_legend=1 00:09:08.558 --rc geninfo_all_blocks=1 00:09:08.558 --rc geninfo_unexecuted_blocks=1 00:09:08.558 00:09:08.558 ' 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:08.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.558 --rc genhtml_branch_coverage=1 00:09:08.558 --rc genhtml_function_coverage=1 00:09:08.558 --rc genhtml_legend=1 00:09:08.558 --rc geninfo_all_blocks=1 00:09:08.558 --rc geninfo_unexecuted_blocks=1 00:09:08.558 00:09:08.558 ' 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:08.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.558 --rc genhtml_branch_coverage=1 00:09:08.558 --rc genhtml_function_coverage=1 00:09:08.558 --rc genhtml_legend=1 00:09:08.558 --rc geninfo_all_blocks=1 00:09:08.558 --rc geninfo_unexecuted_blocks=1 00:09:08.558 00:09:08.558 ' 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
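The lcov gate traced above rests on the generic comparator in scripts/common.sh: lt splits both version strings on '.', '-' and ':' and walks the fields numerically, which is why 1.15 correctly sorts below 2. A condensed sketch of that logic (the decimal validation helper is simplified away; treat this as an illustration, not the verbatim script):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local op=$2 IFS='.-:'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        # Compare field by field; missing fields count as 0.
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == *'>'* ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == *'<'* ]]; return; }
        done
        [[ $op == *'='* ]]   # all fields equal
    }

Here it only decides which spelling of the branch/function coverage flags the installed lcov expects, which is where the LCOV_OPTS exports just above come from.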
00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:08.558 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:08.558 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
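The host identity sourced above is minted fresh per run: nvme gen-hostnqn emits an NQN in the standard 2014-08 nvmexpress UUID namespace, and NVME_HOSTID is just that NQN with the prefix stripped, so the 7cdc77f7-... value will differ on every build. A rough shell equivalent (the kernel's random UUID source is an assumption here; real nvme-cli may derive the UUID differently, e.g. from DMI data):

    # Roughly what 'nvme gen-hostnqn' produced above (UUID source is assumed):
    NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:$(</proc/sys/kernel/random/uuid)"
    NVME_HOSTID=${NVME_HOSTNQN##*:}   # e.g. 7cdc77f7-6c10-48d3-83fa-703a290bdf89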
00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:08.559 Cannot find device "nvmf_init_br" 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:08.559 01:50:19 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:08.559 Cannot find device "nvmf_init_br2" 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:08.559 Cannot find device "nvmf_tgt_br" 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:08.559 Cannot find device "nvmf_tgt_br2" 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:08.559 Cannot find device "nvmf_init_br" 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:09:08.559 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:08.818 Cannot find device "nvmf_init_br2" 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:08.818 Cannot find device "nvmf_tgt_br" 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:08.818 Cannot find device "nvmf_tgt_br2" 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:08.818 Cannot find device "nvmf_br" 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:08.818 Cannot find device "nvmf_init_if" 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:08.818 Cannot find device "nvmf_init_if2" 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:08.818 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:08.818 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:08.818 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:09.078 01:50:19 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:09.078 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:09.078 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:09:09.078 00:09:09.078 --- 10.0.0.3 ping statistics --- 00:09:09.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.078 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:09.078 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:09.078 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:09:09.078 00:09:09.078 --- 10.0.0.4 ping statistics --- 00:09:09.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.078 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:09.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:09.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:09:09.078 00:09:09.078 --- 10.0.0.1 ping statistics --- 00:09:09.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.078 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:09.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:09.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:09:09.078 00:09:09.078 --- 10.0.0.2 ping statistics --- 00:09:09.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.078 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=77197 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 77197 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 77197 ']' 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:09.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:09.078 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.078 [2024-11-19 01:50:19.584820] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:09:09.078 [2024-11-19 01:50:19.584913] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.338 [2024-11-19 01:50:19.733553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.338 [2024-11-19 01:50:19.758217] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.338 [2024-11-19 01:50:19.758309] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.338 [2024-11-19 01:50:19.758323] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.338 [2024-11-19 01:50:19.758333] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.338 [2024-11-19 01:50:19.758342] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:09.338 [2024-11-19 01:50:19.758784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.338 [2024-11-19 01:50:19.793492] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.338 [2024-11-19 01:50:19.888217] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:09:09.338 [2024-11-19 01:50:19.908254] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.338 malloc0 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:09.338 { 00:09:09.338 "params": { 00:09:09.338 "name": "Nvme$subsystem", 00:09:09.338 "trtype": "$TEST_TRANSPORT", 00:09:09.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:09.338 "adrfam": "ipv4", 00:09:09.338 "trsvcid": "$NVMF_PORT", 00:09:09.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:09.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:09.338 "hdgst": ${hdgst:-false}, 00:09:09.338 "ddgst": ${ddgst:-false} 00:09:09.338 }, 00:09:09.338 "method": "bdev_nvme_attach_controller" 00:09:09.338 } 00:09:09.338 EOF 00:09:09.338 )") 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:09.338 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
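Stepping back, the nvmftestinit block above assembled a self-contained test network before the target was launched: veth pairs give the host two initiator ports (10.0.0.1 and 10.0.0.2) and the nvmf_tgt_ns_spdk namespace two target ports (10.0.0.3 and 10.0.0.4), everything is joined through the nvmf_br bridge, and tagged iptables ACCEPT rules open TCP 4420. Condensed to one initiator/target pair (the if2/br2 pair in the trace is set up identically, and the SPDK_NVMF comment tags on the iptables rules are dropped):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3   # host-side initiator reaching the namespaced target

The sub-0.1 ms ping round-trips in the trace confirm both directions across the bridge before any NVMe/TCP traffic starts.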
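gen_nvmf_target_json, traced above, builds one bdev_nvme_attach_controller entry per subsystem from a heredoc, joins the entries with jq, and hands the result to bdevperf as --json /dev/fd/62 via process substitution (the exact fd number varies by shell state). A sketch of the assembled document: the params block is verbatim from the trace, while the outer subsystems/bdev wrapper is the standard SPDK JSON-config shape assumed here rather than shown in the log:

    gen_nvmf_target_json() {
        cat <<EOF
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.3",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    }

    # Same invocation as target/zcopy.sh@33, with the process substitution spelled out:
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192

With -o 8192 against malloc0's 4096-byte blocks, each verify I/O covers two blocks of the namespace created a few steps earlier.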
00:09:09.599 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:09.599 01:50:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:09.599 "params": { 00:09:09.599 "name": "Nvme1", 00:09:09.599 "trtype": "tcp", 00:09:09.599 "traddr": "10.0.0.3", 00:09:09.599 "adrfam": "ipv4", 00:09:09.599 "trsvcid": "4420", 00:09:09.599 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:09.599 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:09.599 "hdgst": false, 00:09:09.599 "ddgst": false 00:09:09.599 }, 00:09:09.599 "method": "bdev_nvme_attach_controller" 00:09:09.599 }' 00:09:09.599 [2024-11-19 01:50:19.997240] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:09:09.599 [2024-11-19 01:50:19.997355] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77217 ] 00:09:09.599 [2024-11-19 01:50:20.144454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.599 [2024-11-19 01:50:20.164959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.599 [2024-11-19 01:50:20.201118] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:09.857 Running I/O for 10 seconds... 00:09:11.731 5965.00 IOPS, 46.60 MiB/s [2024-11-19T01:50:23.722Z] 6169.50 IOPS, 48.20 MiB/s [2024-11-19T01:50:24.658Z] 6175.00 IOPS, 48.24 MiB/s [2024-11-19T01:50:25.596Z] 6211.75 IOPS, 48.53 MiB/s [2024-11-19T01:50:26.533Z] 6234.60 IOPS, 48.71 MiB/s [2024-11-19T01:50:27.469Z] 6253.83 IOPS, 48.86 MiB/s [2024-11-19T01:50:28.406Z] 6251.14 IOPS, 48.84 MiB/s [2024-11-19T01:50:29.344Z] 6266.12 IOPS, 48.95 MiB/s [2024-11-19T01:50:30.320Z] 6294.78 IOPS, 49.18 MiB/s [2024-11-19T01:50:30.321Z] 6297.10 IOPS, 49.20 MiB/s 00:09:19.706 Latency(us) 00:09:19.706 [2024-11-19T01:50:30.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:19.706 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:19.706 Verification LBA range: start 0x0 length 0x1000 00:09:19.706 Nvme1n1 : 10.01 6298.65 49.21 0.00 0.00 20258.04 1727.77 29789.09 00:09:19.706 [2024-11-19T01:50:30.321Z] =================================================================================================================== 00:09:19.706 [2024-11-19T01:50:30.321Z] Total : 6298.65 49.21 0.00 0.00 20258.04 1727.77 29789.09 00:09:19.965 01:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=77334 00:09:19.965 01:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:19.965 01:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.965 01:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:19.965 01:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:19.965 01:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:19.965 01:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:19.965 01:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:19.965 01:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:19.965 { 00:09:19.965 "params": { 00:09:19.965 "name": "Nvme$subsystem", 00:09:19.965 "trtype": "$TEST_TRANSPORT", 00:09:19.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:19.965 "adrfam": "ipv4", 00:09:19.965 "trsvcid": "$NVMF_PORT", 00:09:19.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:19.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:19.965 "hdgst": ${hdgst:-false}, 00:09:19.965 "ddgst": ${ddgst:-false} 00:09:19.965 }, 00:09:19.965 "method": "bdev_nvme_attach_controller" 00:09:19.965 } 00:09:19.965 EOF 00:09:19.965 )") 00:09:19.965 01:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:19.965 [2024-11-19 01:50:30.440048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.965 [2024-11-19 01:50:30.440112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.965 01:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:19.965 01:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:19.965 01:50:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:19.965 "params": { 00:09:19.965 "name": "Nvme1", 00:09:19.965 "trtype": "tcp", 00:09:19.965 "traddr": "10.0.0.3", 00:09:19.965 "adrfam": "ipv4", 00:09:19.965 "trsvcid": "4420", 00:09:19.965 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:19.965 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:19.965 "hdgst": false, 00:09:19.965 "ddgst": false 00:09:19.965 }, 00:09:19.965 "method": "bdev_nvme_attach_controller" 00:09:19.965 }' 00:09:19.965 [2024-11-19 01:50:30.452010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.965 [2024-11-19 01:50:30.452065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.965 [2024-11-19 01:50:30.463997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.965 [2024-11-19 01:50:30.464055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.965 [2024-11-19 01:50:30.475987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.965 [2024-11-19 01:50:30.476041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.965 [2024-11-19 01:50:30.487992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.965 [2024-11-19 01:50:30.488046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.965 [2024-11-19 01:50:30.499819] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
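The xtrace above records how nvmf/common.sh builds the JSON that bdevperf consumes: each subsystem appends one heredoc fragment to a bash array (common.sh@582), the fragments are joined with IFS=',' (common.sh@585), validated and pretty-printed through jq (common.sh@584/@586), and the stream reaches bdevperf as --json /dev/fd/63 via process substitution. A minimal standalone sketch of that pattern, assuming the single-subsystem case used in this run (address, port, and NQNs are copied from the trace; the loop bounds and everything else are illustrative):

# Sketch of the gen_nvmf_target_json pattern from the trace above.
config=()
for subsystem in 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done
# With one subsystem the comma-join is a no-op and the result is valid JSON;
# hdgst/ddgst fall back to false when the variables are unset, as in this run.
IFS=','
printf '%s\n' "${config[*]}" | jq .

Fed through process substitution, this matches the invocation recorded at 00:09:19.965: build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192.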
00:09:19.965 [2024-11-19 01:50:30.499932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77334 ] 00:09:19.965 [2024-11-19 01:50:30.500002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.965 [2024-11-19 01:50:30.500028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.965 [2024-11-19 01:50:30.512063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.965 [2024-11-19 01:50:30.512116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.965 [2024-11-19 01:50:30.524065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.965 [2024-11-19 01:50:30.524122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.965 [2024-11-19 01:50:30.536028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.965 [2024-11-19 01:50:30.536057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.965 [2024-11-19 01:50:30.548013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.965 [2024-11-19 01:50:30.548057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.965 [2024-11-19 01:50:30.560011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.965 [2024-11-19 01:50:30.560051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.965 [2024-11-19 01:50:30.572017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.965 [2024-11-19 01:50:30.572059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.224 [2024-11-19 01:50:30.584035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.225 [2024-11-19 01:50:30.584077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.225 [2024-11-19 01:50:30.596020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.225 [2024-11-19 01:50:30.596060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.225 [2024-11-19 01:50:30.608021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.225 [2024-11-19 01:50:30.608061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.225 [2024-11-19 01:50:30.620024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.225 [2024-11-19 01:50:30.620064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.225 [2024-11-19 01:50:30.632032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.225 [2024-11-19 01:50:30.632075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.225 [2024-11-19 01:50:30.644036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.225 [2024-11-19 01:50:30.644080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.225 [2024-11-19 01:50:30.653225] app.c: 919:spdk_app_start: *NOTICE*: 
Total cores available: 1 00:09:20.225 [2024-11-19 01:50:30.656044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.225 [2024-11-19 01:50:30.656087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.225 [2024-11-19 01:50:30.668072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.225 [2024-11-19 01:50:30.668127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.225 [2024-11-19 01:50:30.673006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.225 [2024-11-19 01:50:30.680046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.225 [2024-11-19 01:50:30.680073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.225 [2024-11-19 01:50:30.692095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.225 [2024-11-19 01:50:30.692153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.225 [2024-11-19 01:50:30.704103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.225 [2024-11-19 01:50:30.704160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.225 [2024-11-19 01:50:30.710500] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:20.225 [2024-11-19 01:50:30.716098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.225 [2024-11-19 01:50:30.716152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.225 [2024-11-19 01:50:30.728093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.225 [2024-11-19 01:50:30.728148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.225 [2024-11-19 01:50:30.740083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.225 [2024-11-19 01:50:30.740130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.225 [2024-11-19 01:50:30.752098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.225 [2024-11-19 01:50:30.752143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.225 [2024-11-19 01:50:30.764110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.225 [2024-11-19 01:50:30.764156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.225 [2024-11-19 01:50:30.776114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.225 [2024-11-19 01:50:30.776160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.225 [2024-11-19 01:50:30.788123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.225 [2024-11-19 01:50:30.788167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.225 [2024-11-19 01:50:30.800134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.225 [2024-11-19 01:50:30.800182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.225 Running I/O for 5 seconds... 
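Each *ERROR* pair in this stretch is one rejected RPC: while bdevperf drives I/O against Nvme1n1, the zcopy test keeps asking the target to add a namespace under NSID 1, which the subsystem already exposes, so spdk_nvmf_subsystem_add_ns_ext refuses (subsystem.c:2123) and the RPC layer logs "Unable to add namespace" (nvmf_rpc.c:1517). A hypothetical reproduction from the SPDK repo root (the bdev name Malloc0 and the exact rpc.py flags are assumptions, not taken from this trace):

# Re-adding an NSID that cnode1 already exposes should fail with the same
# two target-side errors seen above; -n selects the NSID explicitly.
scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0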
00:09:20.225 [2024-11-19 01:50:30.812139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.225 [2024-11-19 01:50:30.812183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.225 [2024-11-19 01:50:30.831057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.225 [2024-11-19 01:50:30.831121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.484 [2024-11-19 01:50:30.845750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.484 [2024-11-19 01:50:30.845785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.484 [2024-11-19 01:50:30.861128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.484 [2024-11-19 01:50:30.861175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.484 [2024-11-19 01:50:30.870824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.484 [2024-11-19 01:50:30.870873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.484 [2024-11-19 01:50:30.886098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.484 [2024-11-19 01:50:30.886145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.484 [2024-11-19 01:50:30.895387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.484 [2024-11-19 01:50:30.895434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.484 [2024-11-19 01:50:30.911200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.485 [2024-11-19 01:50:30.911247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.485 [2024-11-19 01:50:30.927417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.485 [2024-11-19 01:50:30.927464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.485 [2024-11-19 01:50:30.943953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.485 [2024-11-19 01:50:30.944000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.485 [2024-11-19 01:50:30.960153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.485 [2024-11-19 01:50:30.960201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.485 [2024-11-19 01:50:30.977797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.485 [2024-11-19 01:50:30.977846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.485 [2024-11-19 01:50:30.993489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.485 [2024-11-19 01:50:30.993578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.485 [2024-11-19 01:50:31.012337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.485 [2024-11-19 01:50:31.012384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.485 [2024-11-19 01:50:31.026806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.485 
[2024-11-19 01:50:31.026852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.485 [2024-11-19 01:50:31.038815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.485 [2024-11-19 01:50:31.038863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.485 [2024-11-19 01:50:31.053462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.485 [2024-11-19 01:50:31.053509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.485 [2024-11-19 01:50:31.070315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.485 [2024-11-19 01:50:31.070361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.485 [2024-11-19 01:50:31.086183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.485 [2024-11-19 01:50:31.086246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.744 [2024-11-19 01:50:31.103460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.744 [2024-11-19 01:50:31.103507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.744 [2024-11-19 01:50:31.118680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.744 [2024-11-19 01:50:31.118740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.744 [2024-11-19 01:50:31.135114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.744 [2024-11-19 01:50:31.135161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.744 [2024-11-19 01:50:31.151052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.744 [2024-11-19 01:50:31.151098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.744 [2024-11-19 01:50:31.169348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.744 [2024-11-19 01:50:31.169394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.744 [2024-11-19 01:50:31.183337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.744 [2024-11-19 01:50:31.183384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.744 [2024-11-19 01:50:31.199119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.744 [2024-11-19 01:50:31.199167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.744 [2024-11-19 01:50:31.215766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.744 [2024-11-19 01:50:31.215815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.744 [2024-11-19 01:50:31.233065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.744 [2024-11-19 01:50:31.233125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.744 [2024-11-19 01:50:31.249216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.744 [2024-11-19 01:50:31.249279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.744 [2024-11-19 01:50:31.264454] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.744 [2024-11-19 01:50:31.264504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.744 [2024-11-19 01:50:31.280027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.744 [2024-11-19 01:50:31.280074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.744 [2024-11-19 01:50:31.289566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.744 [2024-11-19 01:50:31.289611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.744 [2024-11-19 01:50:31.306293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.744 [2024-11-19 01:50:31.306369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.744 [2024-11-19 01:50:31.323684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.744 [2024-11-19 01:50:31.323746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.744 [2024-11-19 01:50:31.340263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.744 [2024-11-19 01:50:31.340310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:20.744 [2024-11-19 01:50:31.356553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:20.744 [2024-11-19 01:50:31.356599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.003 [2024-11-19 01:50:31.373446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.003 [2024-11-19 01:50:31.373494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.003 [2024-11-19 01:50:31.390372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.003 [2024-11-19 01:50:31.390419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.003 [2024-11-19 01:50:31.407110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.003 [2024-11-19 01:50:31.407157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.003 [2024-11-19 01:50:31.423292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.003 [2024-11-19 01:50:31.423339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.003 [2024-11-19 01:50:31.441541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.003 [2024-11-19 01:50:31.441587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.003 [2024-11-19 01:50:31.456116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.003 [2024-11-19 01:50:31.456163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.003 [2024-11-19 01:50:31.472914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.003 [2024-11-19 01:50:31.472960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.003 [2024-11-19 01:50:31.489866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.003 [2024-11-19 01:50:31.489917] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.003 [2024-11-19 01:50:31.505710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.003 [2024-11-19 01:50:31.505760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.003 [2024-11-19 01:50:31.524404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.003 [2024-11-19 01:50:31.524450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.003 [2024-11-19 01:50:31.538845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.003 [2024-11-19 01:50:31.538895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.003 [2024-11-19 01:50:31.555681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.003 [2024-11-19 01:50:31.555719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.003 [2024-11-19 01:50:31.573230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.003 [2024-11-19 01:50:31.573278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.003 [2024-11-19 01:50:31.587865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.003 [2024-11-19 01:50:31.587911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.003 [2024-11-19 01:50:31.603515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.003 [2024-11-19 01:50:31.603576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.262 [2024-11-19 01:50:31.621760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.262 [2024-11-19 01:50:31.621795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.262 [2024-11-19 01:50:31.636240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.262 [2024-11-19 01:50:31.636286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.262 [2024-11-19 01:50:31.652555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.263 [2024-11-19 01:50:31.652602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.263 [2024-11-19 01:50:31.669535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.263 [2024-11-19 01:50:31.669581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.263 [2024-11-19 01:50:31.685712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.263 [2024-11-19 01:50:31.685760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.263 [2024-11-19 01:50:31.701331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.263 [2024-11-19 01:50:31.701377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.263 [2024-11-19 01:50:31.713239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.263 [2024-11-19 01:50:31.713286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.263 [2024-11-19 01:50:31.729960] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.263 [2024-11-19 01:50:31.730038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.263 [2024-11-19 01:50:31.745793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.263 [2024-11-19 01:50:31.745843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.263 [2024-11-19 01:50:31.763849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.263 [2024-11-19 01:50:31.763897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.263 [2024-11-19 01:50:31.779150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.263 [2024-11-19 01:50:31.779197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.263 [2024-11-19 01:50:31.796752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.263 [2024-11-19 01:50:31.796800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.263 [2024-11-19 01:50:31.812220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.263 [2024-11-19 01:50:31.812267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.263 11900.00 IOPS, 92.97 MiB/s [2024-11-19T01:50:31.878Z] [2024-11-19 01:50:31.822999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.263 [2024-11-19 01:50:31.823047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.263 [2024-11-19 01:50:31.837333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.263 [2024-11-19 01:50:31.837380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.263 [2024-11-19 01:50:31.855822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.263 [2024-11-19 01:50:31.855870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.263 [2024-11-19 01:50:31.871110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.263 [2024-11-19 01:50:31.871156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.522 [2024-11-19 01:50:31.887947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.522 [2024-11-19 01:50:31.887994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.522 [2024-11-19 01:50:31.904428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.522 [2024-11-19 01:50:31.904474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.522 [2024-11-19 01:50:31.920259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.522 [2024-11-19 01:50:31.920308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.522 [2024-11-19 01:50:31.930154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.523 [2024-11-19 01:50:31.930189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.523 [2024-11-19 01:50:31.945959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
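The throughput samples interleaved with the errors ("11900.00 IOPS, 92.97 MiB/s" just above) follow directly from the 8192-byte I/O size passed to bdevperf with -o: throughput is IOPS times I/O size. A quick check of that arithmetic:

# 11900 IOPS x 8192 B = 97,484,800 B/s; divided by 1,048,576 B/MiB -> 92.97 MiB/s
awk 'BEGIN { printf "%.2f MiB/s\n", 11900.00 * 8192 / (1024 * 1024) }'

The same relation holds for the earlier 10-second summary: 6298.65 IOPS x 8192 B / 2^20 = 49.21 MiB/s, as reported in the Latency table.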
00:09:21.523 [2024-11-19 01:50:31.946010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.523 [2024-11-19 01:50:31.957401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.523 [2024-11-19 01:50:31.957437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.523 [2024-11-19 01:50:31.973936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.523 [2024-11-19 01:50:31.973973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.523 [2024-11-19 01:50:31.990749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.523 [2024-11-19 01:50:31.990783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.523 [2024-11-19 01:50:32.006534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.523 [2024-11-19 01:50:32.006599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.523 [2024-11-19 01:50:32.024651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.523 [2024-11-19 01:50:32.024685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.523 [2024-11-19 01:50:32.039063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.523 [2024-11-19 01:50:32.039099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.523 [2024-11-19 01:50:32.056359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.523 [2024-11-19 01:50:32.056559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.523 [2024-11-19 01:50:32.070348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.523 [2024-11-19 01:50:32.070383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.523 [2024-11-19 01:50:32.087149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.523 [2024-11-19 01:50:32.087311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.523 [2024-11-19 01:50:32.102065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.523 [2024-11-19 01:50:32.102229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.523 [2024-11-19 01:50:32.118931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.523 [2024-11-19 01:50:32.118967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.523 [2024-11-19 01:50:32.133632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.523 [2024-11-19 01:50:32.133696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.783 [2024-11-19 01:50:32.148398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.783 [2024-11-19 01:50:32.148612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.783 [2024-11-19 01:50:32.165735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.783 [2024-11-19 01:50:32.165775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.783 [2024-11-19 01:50:32.180790] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.783 [2024-11-19 01:50:32.180825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.783 [2024-11-19 01:50:32.193206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.783 [2024-11-19 01:50:32.193241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.783 [2024-11-19 01:50:32.210335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.783 [2024-11-19 01:50:32.210370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.783 [2024-11-19 01:50:32.227549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.783 [2024-11-19 01:50:32.227584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.783 [2024-11-19 01:50:32.244876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.783 [2024-11-19 01:50:32.244911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.783 [2024-11-19 01:50:32.260598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.783 [2024-11-19 01:50:32.260652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.783 [2024-11-19 01:50:32.276248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.783 [2024-11-19 01:50:32.276285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.783 [2024-11-19 01:50:32.293289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.783 [2024-11-19 01:50:32.293326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.783 [2024-11-19 01:50:32.308539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.783 [2024-11-19 01:50:32.308622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.783 [2024-11-19 01:50:32.324179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.783 [2024-11-19 01:50:32.324231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.783 [2024-11-19 01:50:32.334370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.783 [2024-11-19 01:50:32.334576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.783 [2024-11-19 01:50:32.350261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.783 [2024-11-19 01:50:32.350438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.783 [2024-11-19 01:50:32.366760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.783 [2024-11-19 01:50:32.366795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.783 [2024-11-19 01:50:32.383189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.783 [2024-11-19 01:50:32.383225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.783 [2024-11-19 01:50:32.400070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.043 [2024-11-19 01:50:32.400250] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.043 [2024-11-19 01:50:32.416787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.043 [2024-11-19 01:50:32.416852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.043 [2024-11-19 01:50:32.433300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.043 [2024-11-19 01:50:32.433367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.043 [2024-11-19 01:50:32.449485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.043 [2024-11-19 01:50:32.449581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.043 [2024-11-19 01:50:32.467653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.043 [2024-11-19 01:50:32.467688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.043 [2024-11-19 01:50:32.482587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.043 [2024-11-19 01:50:32.482894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.043 [2024-11-19 01:50:32.498538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.043 [2024-11-19 01:50:32.498869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.043 [2024-11-19 01:50:32.515966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.043 [2024-11-19 01:50:32.516021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.043 [2024-11-19 01:50:32.531818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.043 [2024-11-19 01:50:32.531892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.043 [2024-11-19 01:50:32.549840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.043 [2024-11-19 01:50:32.550147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.043 [2024-11-19 01:50:32.565940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.043 [2024-11-19 01:50:32.565995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.043 [2024-11-19 01:50:32.582112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.043 [2024-11-19 01:50:32.582147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.043 [2024-11-19 01:50:32.591681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.043 [2024-11-19 01:50:32.591719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.043 [2024-11-19 01:50:32.607262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.043 [2024-11-19 01:50:32.607297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.043 [2024-11-19 01:50:32.625156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.043 [2024-11-19 01:50:32.625194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.043 [2024-11-19 01:50:32.640805] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.043 [2024-11-19 01:50:32.640884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.043 [2024-11-19 01:50:32.658190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.043 [2024-11-19 01:50:32.658227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.302 [2024-11-19 01:50:32.674453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.302 [2024-11-19 01:50:32.674494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.302 [2024-11-19 01:50:32.691471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.302 [2024-11-19 01:50:32.691526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.303 [2024-11-19 01:50:32.708979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.303 [2024-11-19 01:50:32.709016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.303 [2024-11-19 01:50:32.726028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.303 [2024-11-19 01:50:32.726062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.303 [2024-11-19 01:50:32.741235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.303 [2024-11-19 01:50:32.741286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.303 [2024-11-19 01:50:32.751810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.303 [2024-11-19 01:50:32.751845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.303 [2024-11-19 01:50:32.767090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.303 [2024-11-19 01:50:32.767272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.303 [2024-11-19 01:50:32.782533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.303 [2024-11-19 01:50:32.782723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.303 [2024-11-19 01:50:32.799863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.303 [2024-11-19 01:50:32.799898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.303 11794.50 IOPS, 92.14 MiB/s [2024-11-19T01:50:32.918Z] [2024-11-19 01:50:32.816241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.303 [2024-11-19 01:50:32.816276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.303 [2024-11-19 01:50:32.832362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.303 [2024-11-19 01:50:32.832399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.303 [2024-11-19 01:50:32.851785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.303 [2024-11-19 01:50:32.851821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.303 [2024-11-19 01:50:32.867030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:22.303 [2024-11-19 01:50:32.867066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.303 [2024-11-19 01:50:32.882584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.303 [2024-11-19 01:50:32.882790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.303 [2024-11-19 01:50:32.893014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.303 [2024-11-19 01:50:32.893067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.303 [2024-11-19 01:50:32.908121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.303 [2024-11-19 01:50:32.908159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.562 [2024-11-19 01:50:32.924141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.562 [2024-11-19 01:50:32.924183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.562 [2024-11-19 01:50:32.934216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.562 [2024-11-19 01:50:32.934255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.562 [2024-11-19 01:50:32.949501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.562 [2024-11-19 01:50:32.949756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.562 [2024-11-19 01:50:32.967224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.562 [2024-11-19 01:50:32.967263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.562 [2024-11-19 01:50:32.982921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.562 [2024-11-19 01:50:32.982957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.562 [2024-11-19 01:50:32.992239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.562 [2024-11-19 01:50:32.992275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.562 [2024-11-19 01:50:33.008339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.562 [2024-11-19 01:50:33.008395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.562 [2024-11-19 01:50:33.024542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.562 [2024-11-19 01:50:33.024601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.562 [2024-11-19 01:50:33.041403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.562 [2024-11-19 01:50:33.041464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.562 [2024-11-19 01:50:33.057041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.562 [2024-11-19 01:50:33.057096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.562 [2024-11-19 01:50:33.067060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.562 [2024-11-19 01:50:33.067109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.562 [2024-11-19 01:50:33.082968] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.562 [2024-11-19 01:50:33.083010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.562 [2024-11-19 01:50:33.092793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.562 [2024-11-19 01:50:33.092838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.562 [2024-11-19 01:50:33.107387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.562 [2024-11-19 01:50:33.107444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.562 [2024-11-19 01:50:33.117260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.562 [2024-11-19 01:50:33.117311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.562 [2024-11-19 01:50:33.131862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.562 [2024-11-19 01:50:33.132150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.562 [2024-11-19 01:50:33.147571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.562 [2024-11-19 01:50:33.147625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.562 [2024-11-19 01:50:33.159399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.562 [2024-11-19 01:50:33.159452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.562 [2024-11-19 01:50:33.175379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.562 [2024-11-19 01:50:33.175424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.822 [2024-11-19 01:50:33.191712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.822 [2024-11-19 01:50:33.191748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.822 [2024-11-19 01:50:33.210143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.822 [2024-11-19 01:50:33.210321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.822 [2024-11-19 01:50:33.224429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.822 [2024-11-19 01:50:33.224464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.822 [2024-11-19 01:50:33.239411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.822 [2024-11-19 01:50:33.239445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.822 [2024-11-19 01:50:33.251308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.822 [2024-11-19 01:50:33.251343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.822 [2024-11-19 01:50:33.266885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.822 [2024-11-19 01:50:33.266919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.822 [2024-11-19 01:50:33.283979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.822 [2024-11-19 01:50:33.284145] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.822 [2024-11-19 01:50:33.300191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.822 [2024-11-19 01:50:33.300226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.822 [2024-11-19 01:50:33.310132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.822 [2024-11-19 01:50:33.310167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.822 [2024-11-19 01:50:33.325614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.822 [2024-11-19 01:50:33.325681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.822 [2024-11-19 01:50:33.338294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.822 [2024-11-19 01:50:33.338331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.822 [2024-11-19 01:50:33.351362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.822 [2024-11-19 01:50:33.351575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.822 [2024-11-19 01:50:33.370319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.822 [2024-11-19 01:50:33.370482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.822 [2024-11-19 01:50:33.384980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.822 [2024-11-19 01:50:33.385141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.822 [2024-11-19 01:50:33.401780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.822 [2024-11-19 01:50:33.401818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.822 [2024-11-19 01:50:33.416937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.822 [2024-11-19 01:50:33.416971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:22.822 [2024-11-19 01:50:33.426225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:22.822 [2024-11-19 01:50:33.426261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.081 [2024-11-19 01:50:33.442702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.081 [2024-11-19 01:50:33.442769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.081 [2024-11-19 01:50:33.454034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.081 [2024-11-19 01:50:33.454083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.081 [2024-11-19 01:50:33.470227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.081 [2024-11-19 01:50:33.470406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.081 [2024-11-19 01:50:33.487026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:23.081 [2024-11-19 01:50:33.487061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:23.081 [2024-11-19 01:50:33.503858] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:23.081 [2024-11-19 01:50:33.503894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line NSID-conflict error pair repeats for every add-namespace attempt from 01:50:33.521 through 01:50:35.943; only the periodic I/O progress lines are kept below ...]
11850.67 IOPS, 92.58 MiB/s [2024-11-19T01:50:33.956Z]
11862.00 IOPS, 92.67 MiB/s [2024-11-19T01:50:34.997Z]
11878.80 IOPS, 92.80 MiB/s [2024-11-19T01:50:36.037Z]
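For readers unfamiliar with the error above: spdk_nvmf_subsystem_add_ns_ext rejects any nvmf_subsystem_add_ns RPC whose requested NSID is already allocated, which is exactly what this phase of zcopy.sh provokes in a loop while I/O is running. A minimal hand-run sketch of one such conflict (hypothetical bdev names, and it assumes a running SPDK target with rpc.py on PATH and an existing subsystem nqn.2016-06.io.spdk:cnode1):

    # Hypothetical reproduction -- the bdev names are made up for this sketch.
    rpc.py bdev_malloc_create -b malloc0 64 512
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # succeeds, takes NSID 1
    rpc.py bdev_malloc_create -b malloc1 64 512
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1   # fails: NSID 1 already in use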
00:09:25.422
00:09:25.422 Latency(us)
00:09:25.422 [2024-11-19T01:50:36.037Z] Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
00:09:25.422 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:25.422 Nvme1n1 : 5.01  11880.11  92.81  0.00  0.00  10762.61  4319.42  26333.56
00:09:25.422 [2024-11-19T01:50:36.037Z] ===================================================================================================================
00:09:25.422 [2024-11-19T01:50:36.037Z] Total : 11880.11  92.81  0.00  0.00  10762.61  4319.42  26333.56
00:09:25.422 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (77334) - No such process
00:09:25.422 01:50:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 77334
00:09:25.422 01:50:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:25.422 01:50:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:25.422 01:50:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:25.422 01:50:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.422 01:50:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:25.422 01:50:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:25.422 01:50:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:25.422 delay0
00:09:25.422 01:50:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.422 01:50:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:09:25.422 01:50:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:25.422 01:50:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:25.422 01:50:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.422 01:50:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'
00:09:25.682 [2024-11-19 01:50:36.149236] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:09:32.249 Initializing NVMe Controllers
00:09:32.249 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:09:32.249 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:32.249 Initialization complete. Launching workers.
00:09:32.249 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 129
00:09:32.249 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 416, failed to submit 33
00:09:32.249 success 310, unsuccessful 106, failed 0
00:09:32.249 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:09:32.249 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:09:32.249 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:32.249 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:09:32.249 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:32.249 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:09:32.249 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:32.249 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:32.249 rmmod nvme_tcp
00:09:32.249 rmmod nvme_fabrics
00:09:32.249 rmmod nvme_keyring
00:09:32.249 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:32.249 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:09:32.249 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:09:32.249 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 77197 ']'
00:09:32.249 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 77197
00:09:32.249 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 77197 ']'
00:09:32.249 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 77197
00:09:32.249 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:09:32.249 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:32.249 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77197
00:09:32.249 killing process with pid 77197
00:09:32.249 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:09:32.249 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:09:32.249 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77197'
00:09:32.249 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 77197
00:09:32.249 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 77197
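As a reading aid for the sequence above: zcopy.sh swaps the namespace's malloc bdev for a delay bdev so that outstanding I/O lingers long enough for the abort example to have real in-flight commands to cancel. A rough hand-run equivalent, assuming the same bdev, subsystem, and address as this log:

    # Sketch only -- assumes a target already serving nqn.2016-06.io.spdk:cnode1
    # on 10.0.0.3:4420 with a bdev named malloc0, as in the log above.
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    # Wrap malloc0 in a delay bdev: -r/-t are avg/p99 read latency, -w/-n are
    # avg/p99 write latency, all in microseconds (~1 s here).
    rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # With every I/O stalled ~1 s, abort requests race genuinely pending commands.
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'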
00:09:32.249 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:32.250 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:32.250 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:32.250 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:32.250 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:32.250 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:32.250 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:32.250 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:32.250 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:32.250 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:32.250 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:32.250 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:32.250 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:32.250 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:32.250 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:32.250 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:32.250 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:32.250 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.250 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:32.250 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.250 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:09:32.250 00:09:32.250 real 0m23.852s 00:09:32.250 user 0m38.903s 00:09:32.250 sys 0m6.742s 00:09:32.250 ************************************ 00:09:32.250 END TEST nvmf_zcopy 00:09:32.250 ************************************ 00:09:32.250 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.250 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.250 01:50:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:32.250 01:50:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:32.250 01:50:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.250 01:50:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:32.250 ************************************ 00:09:32.250 START TEST nvmf_nmic 00:09:32.250 ************************************ 
00:09:32.250 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:09:32.250 * Looking for test storage...
00:09:32.510 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:09:32.510 01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]]
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:32.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:32.510 --rc genhtml_branch_coverage=1
00:09:32.510 --rc genhtml_function_coverage=1
00:09:32.510 --rc genhtml_legend=1
00:09:32.510 --rc geninfo_all_blocks=1
00:09:32.510 --rc geninfo_unexecuted_blocks=1
00:09:32.510
00:09:32.510 '
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:32.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:32.510 --rc genhtml_branch_coverage=1
00:09:32.510 --rc genhtml_function_coverage=1
00:09:32.510 --rc genhtml_legend=1
00:09:32.510 --rc geninfo_all_blocks=1
00:09:32.510 --rc geninfo_unexecuted_blocks=1
00:09:32.510
00:09:32.510 '
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:09:32.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:32.510 --rc genhtml_branch_coverage=1
00:09:32.510 --rc genhtml_function_coverage=1
00:09:32.510 --rc genhtml_legend=1
00:09:32.510 --rc geninfo_all_blocks=1
00:09:32.510 --rc geninfo_unexecuted_blocks=1
00:09:32.510
00:09:32.510 '
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:09:32.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:32.510 --rc genhtml_branch_coverage=1
00:09:32.510 --rc genhtml_function_coverage=1
00:09:32.510 --rc genhtml_legend=1
00:09:32.510 --rc geninfo_all_blocks=1
00:09:32.510 --rc geninfo_unexecuted_blocks=1
00:09:32.510
00:09:32.510 '
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
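The cmp_versions trace above is scripts/common.sh deciding that lcov's version 1.15 is less than 2 by splitting each version string on separators and comparing the fields numerically. A stripped-down sketch of the same idea (not the actual scripts/common.sh source):

    # Minimal version_lt sketch: split on '.', compare field by field.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo "1.15 < 2"   # prints: 1.15 < 2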
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']'
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
01:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
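The "[: : integer expression expected" line above is not a test failure; it is bash's test builtin complaining that common.sh line 33 compared an unset (empty) variable with -eq, which requires integer operands. A minimal sketch reproducing the same message (hypothetical variable name):

    # SOME_UNSET_VAR is a made-up name standing in for the empty value traced above.
    unset SOME_UNSET_VAR
    if [ "$SOME_UNSET_VAR" -eq 1 ]; then   # -eq needs an integer; '' is not one
        echo "never reached"
    fi
    # bash prints: [: : integer expression expected  (test exits 2, branch skipped)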
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']'
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]]
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]]
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]]
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
Cannot find device "nvmf_init_br"
find device "nvmf_init_br" 00:09:32.511 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:32.511 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:32.511 Cannot find device "nvmf_init_br2" 00:09:32.511 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:32.511 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:32.511 Cannot find device "nvmf_tgt_br" 00:09:32.511 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:09:32.511 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:32.511 Cannot find device "nvmf_tgt_br2" 00:09:32.511 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:09:32.511 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:32.511 Cannot find device "nvmf_init_br" 00:09:32.511 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:09:32.511 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:32.511 Cannot find device "nvmf_init_br2" 00:09:32.511 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:09:32.511 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:32.511 Cannot find device "nvmf_tgt_br" 00:09:32.511 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:09:32.511 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:32.511 Cannot find device "nvmf_tgt_br2" 00:09:32.511 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:09:32.511 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:32.511 Cannot find device "nvmf_br" 00:09:32.511 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:09:32.511 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:32.511 Cannot find device "nvmf_init_if" 00:09:32.511 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:09:32.511 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:32.770 Cannot find device "nvmf_init_if2" 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:32.770 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:32.770 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
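The records that follow rebuild the test network from nothing: a dedicated namespace for the target, veth pairs whose host-side ends hang off a bridge, and the 10.0.0.0/24 addressing the rest of the run relies on (10.0.0.1/2 for the initiator side, 10.0.0.3/4 inside nvmf_tgt_ns_spdk). A condensed sketch of the same topology, one path per side; the second pair (nvmf_init_if2/nvmf_tgt_if2) follows the identical pattern:

# Condensed sketch of the topology the commands below build (requires root).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                             # bridge joins the host-side ends
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up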
00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:32.770 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:32.770 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:09:32.770 00:09:32.770 --- 10.0.0.3 ping statistics --- 00:09:32.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.770 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:32.770 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:32.770 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:09:32.770 00:09:32.770 --- 10.0.0.4 ping statistics --- 00:09:32.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.770 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:32.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:32.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:09:32.770 00:09:32.770 --- 10.0.0.1 ping statistics --- 00:09:32.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.770 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:32.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:32.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:09:32.770 00:09:32.770 --- 10.0.0.2 ping statistics --- 00:09:32.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.770 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:32.770 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:33.029 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:33.029 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:33.029 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:33.030 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.030 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=77710 00:09:33.030 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:33.030 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 77710 00:09:33.030 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 77710 ']' 00:09:33.030 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.030 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.030 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.030 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.030 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.030 [2024-11-19 01:50:43.471440] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:09:33.030 [2024-11-19 01:50:43.471815] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.030 [2024-11-19 01:50:43.625349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:33.289 [2024-11-19 01:50:43.651932] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.289 [2024-11-19 01:50:43.652236] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:33.289 [2024-11-19 01:50:43.652390] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.289 [2024-11-19 01:50:43.652570] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.289 [2024-11-19 01:50:43.652618] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:33.289 [2024-11-19 01:50:43.653690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.289 [2024-11-19 01:50:43.653831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:33.289 [2024-11-19 01:50:43.653930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.289 [2024-11-19 01:50:43.653930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:33.289 [2024-11-19 01:50:43.687177] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:33.289 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:33.289 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:33.289 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:33.289 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:33.289 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.289 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.289 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:33.289 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.290 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.290 [2024-11-19 01:50:43.826574] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.290 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.290 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:33.290 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.290 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.290 Malloc0 00:09:33.290 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.290 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:33.290 01:50:43 
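With the target app up inside the namespace, the surrounding records provision it over JSON-RPC: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem cnode1 with that bdev as a namespace, and (as the next records show) a listener on 10.0.0.3:4420. rpc_cmd forwards these to the target's RPC socket; the equivalent direct scripts/rpc.py calls would look like the sketch below, with flags copied verbatim from this run:

# Equivalent provisioning via scripts/rpc.py (sketch of what rpc_cmd forwards).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192            # transport opts exactly as used above
$rpc bdev_malloc_create 64 512 -b Malloc0               # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420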
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.290 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.290 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.290 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:33.290 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.290 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.290 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.290 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:33.290 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.290 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.290 [2024-11-19 01:50:43.885001] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:33.290 test case1: single bdev can't be used in multiple subsystems 00:09:33.290 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.290 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:33.290 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:33.290 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.290 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.290 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.290 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:09:33.290 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.290 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.290 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.290 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:33.290 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:33.290 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.290 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.549 [2024-11-19 01:50:43.908827] bdev.c:8180:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:33.549 [2024-11-19 01:50:43.908884] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:33.549 [2024-11-19 01:50:43.908912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.549 request: 00:09:33.549 { 00:09:33.549 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:33.549 "namespace": { 00:09:33.549 "bdev_name": "Malloc0", 00:09:33.549 "no_auto_visible": false 00:09:33.549 }, 00:09:33.549 "method": "nvmf_subsystem_add_ns", 00:09:33.549 "req_id": 1 00:09:33.549 } 00:09:33.549 Got JSON-RPC error response 00:09:33.549 response: 00:09:33.549 { 00:09:33.549 "code": -32602, 00:09:33.549 "message": "Invalid parameters" 00:09:33.549 } 00:09:33.549 Adding namespace failed - expected result. 00:09:33.549 test case2: host connect to nvmf target in multiple paths 00:09:33.549 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:33.549 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:33.549 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:33.549 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:33.549 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:33.549 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:09:33.549 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.549 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.549 [2024-11-19 01:50:43.920968] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:09:33.549 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.549 01:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid=7cdc77f7-6c10-48d3-83fa-703a290bdf89 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:33.549 01:50:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid=7cdc77f7-6c10-48d3-83fa-703a290bdf89 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:09:33.824 01:50:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:33.824 01:50:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:33.824 01:50:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:33.824 01:50:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:33.824 01:50:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:35.769 01:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:35.769 01:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:35.769 01:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:35.769 01:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:35.769 01:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:35.769 01:50:46 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:35.769 01:50:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:35.769 [global] 00:09:35.769 thread=1 00:09:35.769 invalidate=1 00:09:35.769 rw=write 00:09:35.769 time_based=1 00:09:35.769 runtime=1 00:09:35.769 ioengine=libaio 00:09:35.769 direct=1 00:09:35.769 bs=4096 00:09:35.769 iodepth=1 00:09:35.769 norandommap=0 00:09:35.769 numjobs=1 00:09:35.769 00:09:35.769 verify_dump=1 00:09:35.769 verify_backlog=512 00:09:35.769 verify_state_save=0 00:09:35.769 do_verify=1 00:09:35.769 verify=crc32c-intel 00:09:35.769 [job0] 00:09:35.769 filename=/dev/nvme0n1 00:09:35.769 Could not set queue depth (nvme0n1) 00:09:35.769 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:35.769 fio-3.35 00:09:35.769 Starting 1 thread 00:09:37.148 00:09:37.148 job0: (groupid=0, jobs=1): err= 0: pid=77794: Tue Nov 19 01:50:47 2024 00:09:37.148 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:37.148 slat (nsec): min=11432, max=63587, avg=14285.82, stdev=4545.49 00:09:37.148 clat (usec): min=126, max=316, avg=172.97, stdev=22.65 00:09:37.148 lat (usec): min=138, max=328, avg=187.25, stdev=23.43 00:09:37.148 clat percentiles (usec): 00:09:37.148 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 149], 20.00th=[ 153], 00:09:37.148 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 169], 60.00th=[ 176], 00:09:37.148 | 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 204], 95.00th=[ 217], 00:09:37.148 | 99.00th=[ 237], 99.50th=[ 249], 99.90th=[ 269], 99.95th=[ 297], 00:09:37.148 | 99.99th=[ 318] 00:09:37.148 write: IOPS=3119, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1001msec); 0 zone resets 00:09:37.148 slat (usec): min=17, max=109, avg=22.70, stdev= 6.78 00:09:37.148 clat (usec): min=78, max=699, avg=109.75, stdev=21.86 00:09:37.148 lat (usec): min=96, max=751, avg=132.45, stdev=23.82 00:09:37.148 clat percentiles (usec): 00:09:37.148 | 1.00th=[ 84], 5.00th=[ 88], 10.00th=[ 90], 20.00th=[ 94], 00:09:37.148 | 30.00th=[ 97], 40.00th=[ 100], 50.00th=[ 105], 60.00th=[ 111], 00:09:37.148 | 70.00th=[ 117], 80.00th=[ 125], 90.00th=[ 137], 95.00th=[ 147], 00:09:37.148 | 99.00th=[ 169], 99.50th=[ 184], 99.90th=[ 229], 99.95th=[ 260], 00:09:37.148 | 99.99th=[ 701] 00:09:37.148 bw ( KiB/s): min=12288, max=12288, per=98.47%, avg=12288.00, stdev= 0.00, samples=1 00:09:37.148 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:37.148 lat (usec) : 100=19.69%, 250=80.03%, 500=0.26%, 750=0.02% 00:09:37.148 cpu : usr=2.40%, sys=9.20%, ctx=6195, majf=0, minf=5 00:09:37.148 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:37.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.148 issued rwts: total=3072,3123,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.148 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:37.148 00:09:37.148 Run status group 0 (all jobs): 00:09:37.148 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:37.148 WRITE: bw=12.2MiB/s (12.8MB/s), 12.2MiB/s-12.2MiB/s (12.8MB/s-12.8MB/s), io=12.2MiB (12.8MB), run=1001-1001msec 00:09:37.148 00:09:37.148 Disk stats (read/write): 00:09:37.148 nvme0n1: ios=2610/3068, merge=0/0, ticks=480/393, 
in_queue=873, util=91.38% 00:09:37.148 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:37.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:37.148 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:37.148 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:37.148 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:37.148 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:37.148 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:37.148 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:37.148 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:37.148 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:37.148 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:37.148 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:37.148 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:37.148 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:37.148 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:37.148 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:37.148 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:37.148 rmmod nvme_tcp 00:09:37.148 rmmod nvme_fabrics 00:09:37.148 rmmod nvme_keyring 00:09:37.148 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:37.148 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:37.148 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:37.149 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 77710 ']' 00:09:37.149 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 77710 00:09:37.149 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 77710 ']' 00:09:37.149 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 77710 00:09:37.149 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:37.149 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:37.149 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77710 00:09:37.149 killing process with pid 77710 00:09:37.149 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:37.149 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:37.149 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77710' 00:09:37.149 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # 
kill 77710 00:09:37.149 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 77710 00:09:37.408 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:37.408 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:37.408 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:37.408 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:37.408 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:37.408 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:37.408 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:37.408 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:37.408 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:37.408 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:37.408 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:37.408 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:37.408 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:37.408 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:37.408 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:37.408 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:37.408 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:37.408 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:37.408 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:37.408 01:50:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:37.408 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:37.667 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:37.667 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:37.667 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.667 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.667 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.667 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:09:37.667 00:09:37.667 real 0m5.291s 00:09:37.667 user 0m15.581s 00:09:37.667 sys 0m2.245s 00:09:37.667 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.667 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:37.667 ************************************ 
00:09:37.667 END TEST nvmf_nmic 00:09:37.667 ************************************ 00:09:37.667 01:50:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:37.667 01:50:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:37.667 01:50:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.667 01:50:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:37.667 ************************************ 00:09:37.667 START TEST nvmf_fio_target 00:09:37.667 ************************************ 00:09:37.667 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:37.667 * Looking for test storage... 00:09:37.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:37.667 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:37.667 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:37.667 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:37.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.927 --rc genhtml_branch_coverage=1 00:09:37.927 --rc genhtml_function_coverage=1 00:09:37.927 --rc genhtml_legend=1 00:09:37.927 --rc geninfo_all_blocks=1 00:09:37.927 --rc geninfo_unexecuted_blocks=1 00:09:37.927 00:09:37.927 ' 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:37.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.927 --rc genhtml_branch_coverage=1 00:09:37.927 --rc genhtml_function_coverage=1 00:09:37.927 --rc genhtml_legend=1 00:09:37.927 --rc geninfo_all_blocks=1 00:09:37.927 --rc geninfo_unexecuted_blocks=1 00:09:37.927 00:09:37.927 ' 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:37.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.927 --rc genhtml_branch_coverage=1 00:09:37.927 --rc genhtml_function_coverage=1 00:09:37.927 --rc genhtml_legend=1 00:09:37.927 --rc geninfo_all_blocks=1 00:09:37.927 --rc geninfo_unexecuted_blocks=1 00:09:37.927 00:09:37.927 ' 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:37.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.927 --rc genhtml_branch_coverage=1 00:09:37.927 --rc genhtml_function_coverage=1 00:09:37.927 --rc genhtml_legend=1 00:09:37.927 --rc geninfo_all_blocks=1 00:09:37.927 --rc geninfo_unexecuted_blocks=1 00:09:37.927 00:09:37.927 ' 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:37.927 
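The lcov probe above runs the harness's cmp_versions helper: 'lt 1.15 2' splits each version into components, walks them left to right, and succeeds when the first version is strictly lower. A compact sketch of the same component-wise compare, simplified to dot-separated numeric components (the real helper also splits on '-' and ':'):

# Simplified component-wise version compare behind "lt 1.15 2" (sketch).
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
        (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
    done
    return 1    # equal is not less-than
}
version_lt 1.15 2 && echo "lcov predates 2.x"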
01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.927 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:37.928 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:37.928 01:50:48 
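A few records up, common.sh minted the run's host identity with 'nvme gen-hostnqn' and reused the NQN's trailing UUID as the host ID (7cdc77f7-...). How the two values relate can be sketched with a parameter expansion; the derivation below is an assumption for illustration, not necessarily how the script computes it:

# Sketch (assumed derivation): the host ID is the UUID tail of the generated NQN.
NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-...
NVME_HOSTID=${NVME_HOSTNQN##*:}      # strip through the last ':' to keep the UUID
export NVME_HOSTNQN NVME_HOSTID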
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:37.928 Cannot find device "nvmf_init_br" 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:37.928 Cannot find device "nvmf_init_br2" 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:37.928 Cannot find device "nvmf_tgt_br" 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:37.928 Cannot find device "nvmf_tgt_br2" 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:37.928 Cannot find device "nvmf_init_br" 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:37.928 Cannot find device "nvmf_init_br2" 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:37.928 Cannot find device "nvmf_tgt_br" 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:37.928 Cannot find device "nvmf_tgt_br2" 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:37.928 Cannot find device "nvmf_br" 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:37.928 Cannot find device "nvmf_init_if" 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:37.928 Cannot find device "nvmf_init_if2" 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:37.928 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:09:37.928 
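As in the nmic test, every teardown probe here is allowed to fail: each 'ip link' call that hits a missing device prints "Cannot find device" and the trace immediately shows 'true', the xtrace signature of a 'cmd || true' guard that keeps set -e from aborting the pre-clean. Written out directly, the pattern is:

# Failure-tolerant pre-clean (the log's "cmd || true" pattern, sketch).
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster 2>/dev/null || true
    ip link set "$dev" down 2>/dev/null || true
done
ip link delete nvmf_br type bridge 2>/dev/null || true
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true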
01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:37.928 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:37.928 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:37.929 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:38.188 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:38.188 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:38.188 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:38.188 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:38.188 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:38.188 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:38.188 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:38.188 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:38.188 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:38.188 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:38.188 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:38.188 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:38.188 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:38.188 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:38.188 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:38.188 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:38.188 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:38.188 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:09:38.188 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:38.188 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:38.188 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:38.188 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:38.188 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:38.188 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:38.188 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:38.188 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:38.188 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:38.188 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:38.188 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:09:38.188 00:09:38.188 --- 10.0.0.3 ping statistics --- 00:09:38.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.188 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:09:38.188 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:38.188 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:38.188 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:09:38.188 00:09:38.188 --- 10.0.0.4 ping statistics --- 00:09:38.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.188 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:09:38.188 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:38.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:38.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:09:38.189 00:09:38.189 --- 10.0.0.1 ping statistics --- 00:09:38.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.189 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:09:38.189 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:38.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:38.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:09:38.189 00:09:38.189 --- 10.0.0.2 ping statistics --- 00:09:38.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.189 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:09:38.189 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:38.189 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:09:38.189 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:38.189 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:38.189 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:38.189 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:38.189 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:38.189 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:38.189 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:38.189 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:38.189 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:38.189 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:38.189 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:38.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.189 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=78023 00:09:38.189 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 78023 00:09:38.189 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 78023 ']' 00:09:38.189 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:38.189 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.189 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.189 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.189 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.189 01:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:38.189 [2024-11-19 01:50:48.801400] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
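At this point nvmfappstart has launched the target itself under `ip netns exec`, so the nvmf_tgt process only sees the namespaced interfaces, and `waitforlisten` blocks until the application answers on its UNIX-domain RPC socket before the test proceeds; the EAL banner that follows is DPDK initializing inside that process. A minimal sketch of the launch-and-wait pattern, using the binary and rpc.py paths from the trace (the polling loop and retry interval here are illustrative, not the exact autotest_common.sh implementation):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the RPC socket until the app is ready to serve requests.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods &> /dev/null; do
      kill -0 "$nvmfpid" 2> /dev/null || exit 1   # give up if the target died
      sleep 0.1
  done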
00:09:38.189 [2024-11-19 01:50:48.801698] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.448 [2024-11-19 01:50:48.948366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:38.448 [2024-11-19 01:50:48.968245] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.448 [2024-11-19 01:50:48.968613] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:38.448 [2024-11-19 01:50:48.968774] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.448 [2024-11-19 01:50:48.968892] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.448 [2024-11-19 01:50:48.968949] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:38.448 [2024-11-19 01:50:48.969831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.448 [2024-11-19 01:50:48.969968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.448 [2024-11-19 01:50:48.970542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.448 [2024-11-19 01:50:48.971007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:38.448 [2024-11-19 01:50:48.999401] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:38.448 01:50:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:38.448 01:50:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:38.448 01:50:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:38.448 01:50:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:38.448 01:50:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:38.707 01:50:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:38.707 01:50:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:38.707 [2024-11-19 01:50:49.324401] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:38.965 01:50:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:39.223 01:50:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:39.223 01:50:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:39.482 01:50:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:39.482 01:50:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:39.742 01:50:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:39.742 01:50:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:40.001 01:50:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:40.001 01:50:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:40.260 01:50:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:40.519 01:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:40.519 01:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:40.777 01:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:40.777 01:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:41.036 01:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:41.036 01:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:41.295 01:50:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:41.555 01:50:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:41.555 01:50:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:41.814 01:50:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:41.814 01:50:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:42.073 01:50:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:42.332 [2024-11-19 01:50:52.889889] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:42.332 01:50:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:42.591 01:50:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:42.849 01:50:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid=7cdc77f7-6c10-48d3-83fa-703a290bdf89 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:43.108 01:50:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:43.108 01:50:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:43.108 01:50:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:43.108 01:50:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:43.108 01:50:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:43.108 01:50:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:45.013 01:50:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:45.013 01:50:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:45.013 01:50:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:45.013 01:50:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:45.013 01:50:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:45.013 01:50:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:45.013 01:50:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:45.013 [global] 00:09:45.013 thread=1 00:09:45.013 invalidate=1 00:09:45.013 rw=write 00:09:45.013 time_based=1 00:09:45.013 runtime=1 00:09:45.013 ioengine=libaio 00:09:45.013 direct=1 00:09:45.013 bs=4096 00:09:45.013 iodepth=1 00:09:45.013 norandommap=0 00:09:45.013 numjobs=1 00:09:45.013 00:09:45.013 verify_dump=1 00:09:45.013 verify_backlog=512 00:09:45.013 verify_state_save=0 00:09:45.013 do_verify=1 00:09:45.013 verify=crc32c-intel 00:09:45.013 [job0] 00:09:45.013 filename=/dev/nvme0n1 00:09:45.013 [job1] 00:09:45.013 filename=/dev/nvme0n2 00:09:45.013 [job2] 00:09:45.013 filename=/dev/nvme0n3 00:09:45.013 [job3] 00:09:45.013 filename=/dev/nvme0n4 00:09:45.272 Could not set queue depth (nvme0n1) 00:09:45.272 Could not set queue depth (nvme0n2) 00:09:45.272 Could not set queue depth (nvme0n3) 00:09:45.272 Could not set queue depth (nvme0n4) 00:09:45.272 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:45.272 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:45.272 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:45.272 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:45.272 fio-3.35 00:09:45.272 Starting 4 threads 00:09:46.650 00:09:46.650 job0: (groupid=0, jobs=1): err= 0: pid=78200: Tue Nov 19 01:50:56 2024 00:09:46.650 read: IOPS=2940, BW=11.5MiB/s (12.0MB/s)(11.5MiB/1001msec) 00:09:46.650 slat (nsec): min=10996, max=41391, avg=13712.95, stdev=3306.04 00:09:46.650 clat (usec): min=131, max=808, avg=168.86, stdev=20.71 00:09:46.650 lat (usec): min=144, max=820, avg=182.58, stdev=20.89 00:09:46.650 clat percentiles (usec): 00:09:46.650 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:09:46.650 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 169], 00:09:46.650 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 190], 95.00th=[ 196], 00:09:46.650 | 99.00th=[ 210], 99.50th=[ 212], 99.90th=[ 227], 99.95th=[ 676], 00:09:46.650 | 99.99th=[ 807] 
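Two quick consistency checks on the fio figures above, before the write-side numbers follow: bandwidth is simply IOPS times block size, so job0's 2940 read IOPS at bs=4096 work out to 2940 x 4096 B/s, roughly 12.0 MB/s, i.e. the reported 11.5 MiB/s; and at iodepth=1 total latency is submission plus completion latency, which also holds here (avg slat 13.7 us + avg clat 168.9 us gives the avg lat of 182.6 us).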
00:09:46.650 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:46.650 slat (nsec): min=15490, max=88732, avg=22282.62, stdev=5410.81 00:09:46.650 clat (usec): min=98, max=1905, avg=124.66, stdev=34.74 00:09:46.650 lat (usec): min=117, max=1929, avg=146.94, stdev=35.21 00:09:46.650 clat percentiles (usec): 00:09:46.650 | 1.00th=[ 103], 5.00th=[ 108], 10.00th=[ 111], 20.00th=[ 114], 00:09:46.650 | 30.00th=[ 117], 40.00th=[ 120], 50.00th=[ 122], 60.00th=[ 125], 00:09:46.650 | 70.00th=[ 129], 80.00th=[ 133], 90.00th=[ 143], 95.00th=[ 149], 00:09:46.650 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 229], 99.95th=[ 310], 00:09:46.650 | 99.99th=[ 1909] 00:09:46.650 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:09:46.650 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:46.650 lat (usec) : 100=0.13%, 250=99.80%, 500=0.02%, 750=0.02%, 1000=0.02% 00:09:46.650 lat (msec) : 2=0.02% 00:09:46.650 cpu : usr=2.80%, sys=8.50%, ctx=6016, majf=0, minf=11 00:09:46.650 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.650 issued rwts: total=2943,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.650 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:46.650 job1: (groupid=0, jobs=1): err= 0: pid=78201: Tue Nov 19 01:50:56 2024 00:09:46.650 read: IOPS=1958, BW=7832KiB/s (8020kB/s)(7832KiB/1000msec) 00:09:46.650 slat (nsec): min=8755, max=37653, avg=11006.78, stdev=2985.01 00:09:46.650 clat (usec): min=219, max=715, avg=261.41, stdev=22.45 00:09:46.650 lat (usec): min=229, max=728, avg=272.42, stdev=22.75 00:09:46.650 clat percentiles (usec): 00:09:46.650 | 1.00th=[ 229], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 247], 00:09:46.650 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 260], 60.00th=[ 265], 00:09:46.650 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 289], 00:09:46.650 | 99.00th=[ 306], 99.50th=[ 326], 99.90th=[ 652], 99.95th=[ 717], 00:09:46.650 | 99.99th=[ 717] 00:09:46.650 write: IOPS=2048, BW=8192KiB/s (8389kB/s)(8192KiB/1000msec); 0 zone resets 00:09:46.650 slat (usec): min=11, max=101, avg=17.80, stdev= 5.03 00:09:46.650 clat (usec): min=107, max=294, avg=207.20, stdev=17.39 00:09:46.650 lat (usec): min=179, max=312, avg=225.01, stdev=17.48 00:09:46.650 clat percentiles (usec): 00:09:46.650 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 194], 00:09:46.650 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 210], 00:09:46.650 | 70.00th=[ 217], 80.00th=[ 221], 90.00th=[ 231], 95.00th=[ 239], 00:09:46.650 | 99.00th=[ 251], 99.50th=[ 255], 99.90th=[ 277], 99.95th=[ 289], 00:09:46.650 | 99.99th=[ 293] 00:09:46.650 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:09:46.650 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:46.650 lat (usec) : 250=64.15%, 500=35.80%, 750=0.05% 00:09:46.650 cpu : usr=1.60%, sys=4.50%, ctx=4008, majf=0, minf=13 00:09:46.650 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.650 issued rwts: total=1958,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.650 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:09:46.650 job2: (groupid=0, jobs=1): err= 0: pid=78202: Tue Nov 19 01:50:56 2024 00:09:46.650 read: IOPS=2629, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec) 00:09:46.650 slat (nsec): min=11444, max=54327, avg=15452.78, stdev=4696.12 00:09:46.650 clat (usec): min=146, max=369, avg=177.65, stdev=16.62 00:09:46.650 lat (usec): min=158, max=392, avg=193.10, stdev=17.82 00:09:46.650 clat percentiles (usec): 00:09:46.650 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:09:46.650 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 182], 00:09:46.650 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 200], 95.00th=[ 208], 00:09:46.650 | 99.00th=[ 223], 99.50th=[ 231], 99.90th=[ 247], 99.95th=[ 326], 00:09:46.650 | 99.99th=[ 371] 00:09:46.650 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:46.650 slat (nsec): min=14260, max=95438, avg=21414.13, stdev=4889.16 00:09:46.650 clat (usec): min=99, max=427, avg=135.02, stdev=18.35 00:09:46.650 lat (usec): min=117, max=452, avg=156.43, stdev=19.32 00:09:46.650 clat percentiles (usec): 00:09:46.650 | 1.00th=[ 106], 5.00th=[ 115], 10.00th=[ 118], 20.00th=[ 123], 00:09:46.650 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 137], 00:09:46.650 | 70.00th=[ 141], 80.00th=[ 147], 90.00th=[ 155], 95.00th=[ 163], 00:09:46.650 | 99.00th=[ 184], 99.50th=[ 235], 99.90th=[ 281], 99.95th=[ 400], 00:09:46.650 | 99.99th=[ 429] 00:09:46.650 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:09:46.650 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:46.650 lat (usec) : 100=0.04%, 250=99.81%, 500=0.16% 00:09:46.650 cpu : usr=2.30%, sys=8.60%, ctx=5708, majf=0, minf=3 00:09:46.650 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.650 issued rwts: total=2632,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.650 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:46.650 job3: (groupid=0, jobs=1): err= 0: pid=78203: Tue Nov 19 01:50:56 2024 00:09:46.650 read: IOPS=1956, BW=7824KiB/s (8012kB/s)(7832KiB/1001msec) 00:09:46.650 slat (nsec): min=10756, max=49868, avg=15168.25, stdev=3286.33 00:09:46.650 clat (usec): min=215, max=782, avg=256.79, stdev=22.55 00:09:46.650 lat (usec): min=229, max=806, avg=271.96, stdev=22.83 00:09:46.650 clat percentiles (usec): 00:09:46.650 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 241], 00:09:46.650 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 260], 00:09:46.650 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 285], 00:09:46.650 | 99.00th=[ 306], 99.50th=[ 318], 99.90th=[ 578], 99.95th=[ 783], 00:09:46.650 | 99.99th=[ 783] 00:09:46.650 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:46.650 slat (nsec): min=15342, max=71899, avg=23146.36, stdev=5217.32 00:09:46.650 clat (usec): min=159, max=283, avg=201.44, stdev=16.47 00:09:46.650 lat (usec): min=179, max=306, avg=224.58, stdev=17.29 00:09:46.650 clat percentiles (usec): 00:09:46.650 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 188], 00:09:46.651 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 204], 00:09:46.651 | 70.00th=[ 210], 80.00th=[ 215], 90.00th=[ 223], 95.00th=[ 231], 00:09:46.651 | 99.00th=[ 241], 99.50th=[ 247], 99.90th=[ 269], 
99.95th=[ 281], 00:09:46.651 | 99.99th=[ 285] 00:09:46.651 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:09:46.651 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:46.651 lat (usec) : 250=69.12%, 500=30.83%, 750=0.02%, 1000=0.02% 00:09:46.651 cpu : usr=2.00%, sys=6.50%, ctx=4006, majf=0, minf=11 00:09:46.651 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.651 issued rwts: total=1958,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.651 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:46.651 00:09:46.651 Run status group 0 (all jobs): 00:09:46.651 READ: bw=37.0MiB/s (38.8MB/s), 7824KiB/s-11.5MiB/s (8012kB/s-12.0MB/s), io=37.1MiB (38.9MB), run=1000-1001msec 00:09:46.651 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1000-1001msec 00:09:46.651 00:09:46.651 Disk stats (read/write): 00:09:46.651 nvme0n1: ios=2610/2583, merge=0/0, ticks=446/342, in_queue=788, util=87.56% 00:09:46.651 nvme0n2: ios=1561/1928, merge=0/0, ticks=396/341, in_queue=737, util=87.80% 00:09:46.651 nvme0n3: ios=2305/2560, merge=0/0, ticks=419/373, in_queue=792, util=89.21% 00:09:46.651 nvme0n4: ios=1536/1928, merge=0/0, ticks=394/397, in_queue=791, util=89.77% 00:09:46.651 01:50:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:46.651 [global] 00:09:46.651 thread=1 00:09:46.651 invalidate=1 00:09:46.651 rw=randwrite 00:09:46.651 time_based=1 00:09:46.651 runtime=1 00:09:46.651 ioengine=libaio 00:09:46.651 direct=1 00:09:46.651 bs=4096 00:09:46.651 iodepth=1 00:09:46.651 norandommap=0 00:09:46.651 numjobs=1 00:09:46.651 00:09:46.651 verify_dump=1 00:09:46.651 verify_backlog=512 00:09:46.651 verify_state_save=0 00:09:46.651 do_verify=1 00:09:46.651 verify=crc32c-intel 00:09:46.651 [job0] 00:09:46.651 filename=/dev/nvme0n1 00:09:46.651 [job1] 00:09:46.651 filename=/dev/nvme0n2 00:09:46.651 [job2] 00:09:46.651 filename=/dev/nvme0n3 00:09:46.651 [job3] 00:09:46.651 filename=/dev/nvme0n4 00:09:46.651 Could not set queue depth (nvme0n1) 00:09:46.651 Could not set queue depth (nvme0n2) 00:09:46.651 Could not set queue depth (nvme0n3) 00:09:46.651 Could not set queue depth (nvme0n4) 00:09:46.651 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.651 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.651 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.651 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.651 fio-3.35 00:09:46.651 Starting 4 threads 00:09:48.027 00:09:48.027 job0: (groupid=0, jobs=1): err= 0: pid=78267: Tue Nov 19 01:50:58 2024 00:09:48.027 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:48.027 slat (nsec): min=12093, max=30043, avg=13480.22, stdev=1585.32 00:09:48.027 clat (usec): min=131, max=538, avg=159.87, stdev=12.23 00:09:48.027 lat (usec): min=145, max=551, avg=173.35, stdev=12.43 00:09:48.027 clat percentiles (usec): 00:09:48.027 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 
151], 00:09:48.027 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 161], 00:09:48.027 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 174], 95.00th=[ 178], 00:09:48.027 | 99.00th=[ 188], 99.50th=[ 194], 99.90th=[ 202], 99.95th=[ 221], 00:09:48.027 | 99.99th=[ 537] 00:09:48.027 write: IOPS=3213, BW=12.6MiB/s (13.2MB/s)(12.6MiB/1001msec); 0 zone resets 00:09:48.027 slat (usec): min=14, max=101, avg=20.32, stdev= 3.17 00:09:48.027 clat (usec): min=90, max=582, avg=121.58, stdev=16.21 00:09:48.027 lat (usec): min=109, max=602, avg=141.90, stdev=16.66 00:09:48.027 clat percentiles (usec): 00:09:48.027 | 1.00th=[ 100], 5.00th=[ 105], 10.00th=[ 109], 20.00th=[ 113], 00:09:48.027 | 30.00th=[ 116], 40.00th=[ 119], 50.00th=[ 121], 60.00th=[ 124], 00:09:48.027 | 70.00th=[ 127], 80.00th=[ 130], 90.00th=[ 135], 95.00th=[ 139], 00:09:48.027 | 99.00th=[ 151], 99.50th=[ 157], 99.90th=[ 277], 99.95th=[ 474], 00:09:48.027 | 99.99th=[ 586] 00:09:48.027 bw ( KiB/s): min=12880, max=12880, per=25.96%, avg=12880.00, stdev= 0.00, samples=1 00:09:48.027 iops : min= 3220, max= 3220, avg=3220.00, stdev= 0.00, samples=1 00:09:48.027 lat (usec) : 100=0.59%, 250=99.30%, 500=0.08%, 750=0.03% 00:09:48.027 cpu : usr=1.60%, sys=9.20%, ctx=6290, majf=0, minf=11 00:09:48.027 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:48.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.027 issued rwts: total=3072,3217,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.027 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:48.027 job1: (groupid=0, jobs=1): err= 0: pid=78268: Tue Nov 19 01:50:58 2024 00:09:48.027 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:48.027 slat (nsec): min=11687, max=27031, avg=12967.23, stdev=1233.69 00:09:48.027 clat (usec): min=134, max=547, avg=164.45, stdev=12.93 00:09:48.027 lat (usec): min=146, max=560, avg=177.41, stdev=12.98 00:09:48.027 clat percentiles (usec): 00:09:48.027 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:09:48.027 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 165], 00:09:48.027 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 184], 00:09:48.027 | 99.00th=[ 196], 99.50th=[ 200], 99.90th=[ 210], 99.95th=[ 212], 00:09:48.027 | 99.99th=[ 545] 00:09:48.027 write: IOPS=3069, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:48.027 slat (nsec): min=15238, max=95805, avg=20659.91, stdev=3747.84 00:09:48.027 clat (usec): min=95, max=1499, avg=123.82, stdev=28.16 00:09:48.027 lat (usec): min=115, max=1519, avg=144.48, stdev=28.59 00:09:48.027 clat percentiles (usec): 00:09:48.027 | 1.00th=[ 103], 5.00th=[ 109], 10.00th=[ 112], 20.00th=[ 115], 00:09:48.027 | 30.00th=[ 118], 40.00th=[ 121], 50.00th=[ 123], 60.00th=[ 125], 00:09:48.027 | 70.00th=[ 128], 80.00th=[ 133], 90.00th=[ 137], 95.00th=[ 143], 00:09:48.027 | 99.00th=[ 153], 99.50th=[ 159], 99.90th=[ 174], 99.95th=[ 578], 00:09:48.027 | 99.99th=[ 1500] 00:09:48.027 bw ( KiB/s): min=12288, max=12288, per=24.76%, avg=12288.00, stdev= 0.00, samples=1 00:09:48.027 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:48.027 lat (usec) : 100=0.08%, 250=99.87%, 750=0.03% 00:09:48.027 lat (msec) : 2=0.02% 00:09:48.027 cpu : usr=2.50%, sys=8.30%, ctx=6145, majf=0, minf=13 00:09:48.027 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:48.027 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.027 issued rwts: total=3072,3073,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.027 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:48.027 job2: (groupid=0, jobs=1): err= 0: pid=78269: Tue Nov 19 01:50:58 2024 00:09:48.027 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:48.027 slat (nsec): min=11275, max=33285, avg=13668.54, stdev=1939.78 00:09:48.027 clat (usec): min=153, max=580, avg=182.35, stdev=18.07 00:09:48.027 lat (usec): min=166, max=604, avg=196.02, stdev=18.45 00:09:48.027 clat percentiles (usec): 00:09:48.027 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:09:48.027 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 184], 00:09:48.027 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 206], 00:09:48.027 | 99.00th=[ 217], 99.50th=[ 225], 99.90th=[ 494], 99.95th=[ 570], 00:09:48.028 | 99.99th=[ 578] 00:09:48.028 write: IOPS=3052, BW=11.9MiB/s (12.5MB/s)(11.9MiB/1001msec); 0 zone resets 00:09:48.028 slat (nsec): min=14879, max=84949, avg=20919.93, stdev=3708.69 00:09:48.028 clat (usec): min=108, max=398, avg=139.00, stdev=11.50 00:09:48.028 lat (usec): min=128, max=418, avg=159.92, stdev=12.22 00:09:48.028 clat percentiles (usec): 00:09:48.028 | 1.00th=[ 119], 5.00th=[ 124], 10.00th=[ 127], 20.00th=[ 130], 00:09:48.028 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 141], 00:09:48.028 | 70.00th=[ 145], 80.00th=[ 147], 90.00th=[ 153], 95.00th=[ 157], 00:09:48.028 | 99.00th=[ 169], 99.50th=[ 174], 99.90th=[ 186], 99.95th=[ 194], 00:09:48.028 | 99.99th=[ 400] 00:09:48.028 bw ( KiB/s): min=12288, max=12288, per=24.76%, avg=12288.00, stdev= 0.00, samples=1 00:09:48.028 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:48.028 lat (usec) : 250=99.89%, 500=0.07%, 750=0.04% 00:09:48.028 cpu : usr=1.80%, sys=8.20%, ctx=5616, majf=0, minf=11 00:09:48.028 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:48.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.028 issued rwts: total=2560,3056,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.028 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:48.028 job3: (groupid=0, jobs=1): err= 0: pid=78270: Tue Nov 19 01:50:58 2024 00:09:48.028 read: IOPS=2678, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1001msec) 00:09:48.028 slat (nsec): min=11051, max=35504, avg=12864.41, stdev=1917.39 00:09:48.028 clat (usec): min=141, max=2646, avg=174.76, stdev=51.42 00:09:48.028 lat (usec): min=154, max=2669, avg=187.63, stdev=51.70 00:09:48.028 clat percentiles (usec): 00:09:48.028 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:09:48.028 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:09:48.028 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 190], 95.00th=[ 196], 00:09:48.028 | 99.00th=[ 221], 99.50th=[ 277], 99.90th=[ 388], 99.95th=[ 676], 00:09:48.028 | 99.99th=[ 2638] 00:09:48.028 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:48.028 slat (nsec): min=14500, max=72978, avg=22050.67, stdev=5872.93 00:09:48.028 clat (usec): min=103, max=1644, avg=136.29, stdev=29.96 00:09:48.028 lat (usec): min=122, max=1664, avg=158.34, stdev=30.65 00:09:48.028 clat percentiles (usec): 00:09:48.028 | 1.00th=[ 115], 5.00th=[ 121], 
10.00th=[ 123], 20.00th=[ 127], 00:09:48.028 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 137], 00:09:48.028 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 157], 00:09:48.028 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 265], 99.95th=[ 302], 00:09:48.028 | 99.99th=[ 1647] 00:09:48.028 bw ( KiB/s): min=12288, max=12288, per=24.76%, avg=12288.00, stdev= 0.00, samples=1 00:09:48.028 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:48.028 lat (usec) : 250=99.55%, 500=0.40%, 750=0.02% 00:09:48.028 lat (msec) : 2=0.02%, 4=0.02% 00:09:48.028 cpu : usr=2.00%, sys=8.50%, ctx=5757, majf=0, minf=14 00:09:48.028 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:48.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.028 issued rwts: total=2681,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.028 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:48.028 00:09:48.028 Run status group 0 (all jobs): 00:09:48.028 READ: bw=44.4MiB/s (46.6MB/s), 9.99MiB/s-12.0MiB/s (10.5MB/s-12.6MB/s), io=44.5MiB (46.6MB), run=1001-1001msec 00:09:48.028 WRITE: bw=48.5MiB/s (50.8MB/s), 11.9MiB/s-12.6MiB/s (12.5MB/s-13.2MB/s), io=48.5MiB (50.9MB), run=1001-1001msec 00:09:48.028 00:09:48.028 Disk stats (read/write): 00:09:48.028 nvme0n1: ios=2610/2919, merge=0/0, ticks=469/381, in_queue=850, util=89.58% 00:09:48.028 nvme0n2: ios=2589/2792, merge=0/0, ticks=444/365, in_queue=809, util=88.36% 00:09:48.028 nvme0n3: ios=2294/2560, merge=0/0, ticks=435/382, in_queue=817, util=89.40% 00:09:48.028 nvme0n4: ios=2419/2560, merge=0/0, ticks=449/361, in_queue=810, util=89.97% 00:09:48.028 01:50:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:48.028 [global] 00:09:48.028 thread=1 00:09:48.028 invalidate=1 00:09:48.028 rw=write 00:09:48.028 time_based=1 00:09:48.028 runtime=1 00:09:48.028 ioengine=libaio 00:09:48.028 direct=1 00:09:48.028 bs=4096 00:09:48.028 iodepth=128 00:09:48.028 norandommap=0 00:09:48.028 numjobs=1 00:09:48.028 00:09:48.028 verify_dump=1 00:09:48.028 verify_backlog=512 00:09:48.028 verify_state_save=0 00:09:48.028 do_verify=1 00:09:48.028 verify=crc32c-intel 00:09:48.028 [job0] 00:09:48.028 filename=/dev/nvme0n1 00:09:48.028 [job1] 00:09:48.028 filename=/dev/nvme0n2 00:09:48.028 [job2] 00:09:48.028 filename=/dev/nvme0n3 00:09:48.028 [job3] 00:09:48.028 filename=/dev/nvme0n4 00:09:48.028 Could not set queue depth (nvme0n1) 00:09:48.028 Could not set queue depth (nvme0n2) 00:09:48.028 Could not set queue depth (nvme0n3) 00:09:48.028 Could not set queue depth (nvme0n4) 00:09:48.028 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:48.028 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:48.028 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:48.028 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:48.028 fio-3.35 00:09:48.028 Starting 4 threads 00:09:49.403 00:09:49.403 job0: (groupid=0, jobs=1): err= 0: pid=78323: Tue Nov 19 01:50:59 2024 00:09:49.403 read: IOPS=5266, BW=20.6MiB/s (21.6MB/s)(20.6MiB/1002msec) 00:09:49.403 slat (usec): min=7, max=3496, 
avg=89.48, stdev=347.85 00:09:49.403 clat (usec): min=1076, max=15730, avg=11716.87, stdev=1275.55 00:09:49.403 lat (usec): min=1087, max=15766, avg=11806.35, stdev=1305.12 00:09:49.403 clat percentiles (usec): 00:09:49.403 | 1.00th=[ 6718], 5.00th=[10028], 10.00th=[10683], 20.00th=[11469], 00:09:49.403 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11731], 60.00th=[11863], 00:09:49.403 | 70.00th=[11994], 80.00th=[12125], 90.00th=[13042], 95.00th=[13566], 00:09:49.403 | 99.00th=[14353], 99.50th=[14615], 99.90th=[15139], 99.95th=[15533], 00:09:49.403 | 99.99th=[15795] 00:09:49.403 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:09:49.403 slat (usec): min=11, max=5922, avg=86.31, stdev=360.79 00:09:49.403 clat (usec): min=8232, max=16893, avg=11497.10, stdev=1041.29 00:09:49.403 lat (usec): min=8268, max=16939, avg=11583.41, stdev=1088.52 00:09:49.403 clat percentiles (usec): 00:09:49.403 | 1.00th=[ 9503], 5.00th=[10421], 10.00th=[10552], 20.00th=[10814], 00:09:49.403 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 00:09:49.403 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12911], 95.00th=[13960], 00:09:49.403 | 99.00th=[15008], 99.50th=[15401], 99.90th=[15926], 99.95th=[15926], 00:09:49.403 | 99.99th=[16909] 00:09:49.403 bw ( KiB/s): min=22228, max=22872, per=35.04%, avg=22550.00, stdev=455.38, samples=2 00:09:49.403 iops : min= 5557, max= 5718, avg=5637.50, stdev=113.84, samples=2 00:09:49.403 lat (msec) : 2=0.08%, 4=0.22%, 10=3.42%, 20=96.28% 00:09:49.403 cpu : usr=4.80%, sys=15.08%, ctx=540, majf=0, minf=1 00:09:49.403 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:49.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:49.403 issued rwts: total=5277,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.403 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:49.403 job1: (groupid=0, jobs=1): err= 0: pid=78324: Tue Nov 19 01:50:59 2024 00:09:49.403 read: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec) 00:09:49.403 slat (usec): min=5, max=9456, avg=203.10, stdev=885.43 00:09:49.403 clat (usec): min=16424, max=46004, avg=25543.78, stdev=5340.51 00:09:49.403 lat (usec): min=16446, max=48520, avg=25746.88, stdev=5419.20 00:09:49.403 clat percentiles (usec): 00:09:49.403 | 1.00th=[16581], 5.00th=[20317], 10.00th=[20579], 20.00th=[21103], 00:09:49.403 | 30.00th=[21365], 40.00th=[21890], 50.00th=[23987], 60.00th=[26608], 00:09:49.403 | 70.00th=[29492], 80.00th=[30540], 90.00th=[31065], 95.00th=[34341], 00:09:49.403 | 99.00th=[42730], 99.50th=[44303], 99.90th=[44827], 99.95th=[44827], 00:09:49.403 | 99.99th=[45876] 00:09:49.403 write: IOPS=2347, BW=9392KiB/s (9617kB/s)(9448KiB/1006msec); 0 zone resets 00:09:49.403 slat (usec): min=9, max=6403, avg=240.12, stdev=866.29 00:09:49.403 clat (usec): min=3145, max=63052, avg=31624.37, stdev=14082.28 00:09:49.403 lat (usec): min=6410, max=63092, avg=31864.50, stdev=14169.87 00:09:49.403 clat percentiles (usec): 00:09:49.403 | 1.00th=[11338], 5.00th=[13829], 10.00th=[14484], 20.00th=[17171], 00:09:49.403 | 30.00th=[19792], 40.00th=[21627], 50.00th=[33424], 60.00th=[37487], 00:09:49.403 | 70.00th=[39060], 80.00th=[44827], 90.00th=[50594], 95.00th=[56361], 00:09:49.403 | 99.00th=[61080], 99.50th=[62653], 99.90th=[63177], 99.95th=[63177], 00:09:49.403 | 99.99th=[63177] 00:09:49.403 bw ( KiB/s): min= 7640, max=10232, per=13.88%, avg=8936.00, 
stdev=1832.82, samples=2 00:09:49.403 iops : min= 1910, max= 2558, avg=2234.00, stdev=458.21, samples=2 00:09:49.403 lat (msec) : 4=0.02%, 10=0.36%, 20=18.03%, 50=75.62%, 100=5.96% 00:09:49.403 cpu : usr=1.79%, sys=7.26%, ctx=295, majf=0, minf=8 00:09:49.403 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:09:49.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:49.403 issued rwts: total=2048,2362,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.403 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:49.403 job2: (groupid=0, jobs=1): err= 0: pid=78325: Tue Nov 19 01:50:59 2024 00:09:49.403 read: IOPS=2962, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1005msec) 00:09:49.403 slat (usec): min=4, max=14664, avg=177.64, stdev=962.05 00:09:49.403 clat (usec): min=1543, max=50069, avg=22717.20, stdev=7243.65 00:09:49.403 lat (usec): min=4986, max=50099, avg=22894.84, stdev=7231.59 00:09:49.403 clat percentiles (usec): 00:09:49.403 | 1.00th=[ 5604], 5.00th=[15401], 10.00th=[17433], 20.00th=[18744], 00:09:49.403 | 30.00th=[19006], 40.00th=[19268], 50.00th=[19530], 60.00th=[20841], 00:09:49.403 | 70.00th=[25560], 80.00th=[27919], 90.00th=[28967], 95.00th=[38011], 00:09:49.403 | 99.00th=[50070], 99.50th=[50070], 99.90th=[50070], 99.95th=[50070], 00:09:49.403 | 99.99th=[50070] 00:09:49.403 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:09:49.403 slat (usec): min=10, max=8772, avg=146.27, stdev=710.62 00:09:49.403 clat (usec): min=11901, max=30858, avg=18955.97, stdev=3752.75 00:09:49.403 lat (usec): min=14732, max=30890, avg=19102.23, stdev=3712.62 00:09:49.403 clat percentiles (usec): 00:09:49.403 | 1.00th=[12780], 5.00th=[14877], 10.00th=[15139], 20.00th=[15533], 00:09:49.403 | 30.00th=[15926], 40.00th=[17171], 50.00th=[19006], 60.00th=[19792], 00:09:49.403 | 70.00th=[20317], 80.00th=[20841], 90.00th=[25560], 95.00th=[26870], 00:09:49.403 | 99.00th=[30802], 99.50th=[30802], 99.90th=[30802], 99.95th=[30802], 00:09:49.403 | 99.99th=[30802] 00:09:49.403 bw ( KiB/s): min=12288, max=12312, per=19.11%, avg=12300.00, stdev=16.97, samples=2 00:09:49.403 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:09:49.403 lat (msec) : 2=0.02%, 10=0.69%, 20=62.14%, 50=36.93%, 100=0.21% 00:09:49.403 cpu : usr=2.79%, sys=8.96%, ctx=190, majf=0, minf=5 00:09:49.403 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:49.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:49.403 issued rwts: total=2977,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.403 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:49.403 job3: (groupid=0, jobs=1): err= 0: pid=78326: Tue Nov 19 01:50:59 2024 00:09:49.403 read: IOPS=4717, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1002msec) 00:09:49.403 slat (usec): min=3, max=3986, avg=99.23, stdev=394.79 00:09:49.403 clat (usec): min=1207, max=17104, avg=13027.39, stdev=1383.37 00:09:49.403 lat (usec): min=1218, max=17139, avg=13126.62, stdev=1418.09 00:09:49.403 clat percentiles (usec): 00:09:49.403 | 1.00th=[ 5276], 5.00th=[11338], 10.00th=[12256], 20.00th=[12649], 00:09:49.403 | 30.00th=[12911], 40.00th=[12911], 50.00th=[13042], 60.00th=[13173], 00:09:49.403 | 70.00th=[13304], 80.00th=[13566], 90.00th=[14353], 95.00th=[14877], 00:09:49.404 | 99.00th=[15533], 
99.50th=[16057], 99.90th=[16712], 99.95th=[16712], 00:09:49.404 | 99.99th=[17171] 00:09:49.404 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:09:49.404 slat (usec): min=10, max=3877, avg=96.35, stdev=448.44 00:09:49.404 clat (usec): min=8279, max=16996, avg=12689.88, stdev=917.64 00:09:49.404 lat (usec): min=8295, max=17048, avg=12786.23, stdev=1004.76 00:09:49.404 clat percentiles (usec): 00:09:49.404 | 1.00th=[10421], 5.00th=[11731], 10.00th=[11863], 20.00th=[12125], 00:09:49.404 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:09:49.404 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13566], 95.00th=[14615], 00:09:49.404 | 99.00th=[15926], 99.50th=[16188], 99.90th=[16712], 99.95th=[16712], 00:09:49.404 | 99.99th=[16909] 00:09:49.404 bw ( KiB/s): min=20480, max=20480, per=31.82%, avg=20480.00, stdev= 0.00, samples=1 00:09:49.404 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:09:49.404 lat (msec) : 2=0.10%, 4=0.25%, 10=0.86%, 20=98.78% 00:09:49.404 cpu : usr=4.20%, sys=13.89%, ctx=399, majf=0, minf=3 00:09:49.404 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:49.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.404 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:49.404 issued rwts: total=4727,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.404 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:49.404 00:09:49.404 Run status group 0 (all jobs): 00:09:49.404 READ: bw=58.4MiB/s (61.2MB/s), 8143KiB/s-20.6MiB/s (8339kB/s-21.6MB/s), io=58.7MiB (61.6MB), run=1002-1006msec 00:09:49.404 WRITE: bw=62.8MiB/s (65.9MB/s), 9392KiB/s-22.0MiB/s (9617kB/s-23.0MB/s), io=63.2MiB (66.3MB), run=1002-1006msec 00:09:49.404 00:09:49.404 Disk stats (read/write): 00:09:49.404 nvme0n1: ios=4658/4738, merge=0/0, ticks=17409/15296, in_queue=32705, util=88.38% 00:09:49.404 nvme0n2: ios=1872/2048, merge=0/0, ticks=15177/19743, in_queue=34920, util=88.26% 00:09:49.404 nvme0n3: ios=2513/2560, merge=0/0, ticks=14133/11007, in_queue=25140, util=89.25% 00:09:49.404 nvme0n4: ios=4096/4354, merge=0/0, ticks=17144/15630, in_queue=32774, util=89.81% 00:09:49.404 01:50:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:49.404 [global] 00:09:49.404 thread=1 00:09:49.404 invalidate=1 00:09:49.404 rw=randwrite 00:09:49.404 time_based=1 00:09:49.404 runtime=1 00:09:49.404 ioengine=libaio 00:09:49.404 direct=1 00:09:49.404 bs=4096 00:09:49.404 iodepth=128 00:09:49.404 norandommap=0 00:09:49.404 numjobs=1 00:09:49.404 00:09:49.404 verify_dump=1 00:09:49.404 verify_backlog=512 00:09:49.404 verify_state_save=0 00:09:49.404 do_verify=1 00:09:49.404 verify=crc32c-intel 00:09:49.404 [job0] 00:09:49.404 filename=/dev/nvme0n1 00:09:49.404 [job1] 00:09:49.404 filename=/dev/nvme0n2 00:09:49.404 [job2] 00:09:49.404 filename=/dev/nvme0n3 00:09:49.404 [job3] 00:09:49.404 filename=/dev/nvme0n4 00:09:49.404 Could not set queue depth (nvme0n1) 00:09:49.404 Could not set queue depth (nvme0n2) 00:09:49.404 Could not set queue depth (nvme0n3) 00:09:49.404 Could not set queue depth (nvme0n4) 00:09:49.404 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:49.404 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:49.404 job2: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:49.404 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:49.404 fio-3.35 00:09:49.404 Starting 4 threads 00:09:50.833 00:09:50.833 job0: (groupid=0, jobs=1): err= 0: pid=78381: Tue Nov 19 01:51:01 2024 00:09:50.833 read: IOPS=1804, BW=7218KiB/s (7392kB/s)(7240KiB/1003msec) 00:09:50.833 slat (usec): min=7, max=13940, avg=230.41, stdev=1090.90 00:09:50.833 clat (usec): min=1315, max=61544, avg=26476.98, stdev=9594.70 00:09:50.833 lat (usec): min=3053, max=61559, avg=26707.39, stdev=9680.42 00:09:50.833 clat percentiles (usec): 00:09:50.833 | 1.00th=[ 3392], 5.00th=[15926], 10.00th=[17433], 20.00th=[21627], 00:09:50.833 | 30.00th=[22152], 40.00th=[22676], 50.00th=[22676], 60.00th=[23200], 00:09:50.833 | 70.00th=[25560], 80.00th=[36963], 90.00th=[41157], 95.00th=[44303], 00:09:50.833 | 99.00th=[52691], 99.50th=[56361], 99.90th=[61604], 99.95th=[61604], 00:09:50.833 | 99.99th=[61604] 00:09:50.833 write: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec); 0 zone resets 00:09:50.833 slat (usec): min=13, max=12010, avg=276.05, stdev=1051.30 00:09:50.833 clat (usec): min=10952, max=74553, avg=38284.42, stdev=16697.69 00:09:50.833 lat (usec): min=10976, max=74577, avg=38560.47, stdev=16784.59 00:09:50.833 clat percentiles (usec): 00:09:50.833 | 1.00th=[11600], 5.00th=[18220], 10.00th=[21103], 20.00th=[23200], 00:09:50.833 | 30.00th=[25297], 40.00th=[27132], 50.00th=[35914], 60.00th=[40109], 00:09:50.833 | 70.00th=[44303], 80.00th=[54789], 90.00th=[65799], 95.00th=[68682], 00:09:50.833 | 99.00th=[73925], 99.50th=[73925], 99.90th=[74974], 99.95th=[74974], 00:09:50.833 | 99.99th=[74974] 00:09:50.833 bw ( KiB/s): min= 6832, max= 9552, per=12.54%, avg=8192.00, stdev=1923.33, samples=2 00:09:50.833 iops : min= 1708, max= 2388, avg=2048.00, stdev=480.83, samples=2 00:09:50.833 lat (msec) : 2=0.03%, 4=0.57%, 10=0.52%, 20=9.10%, 50=75.64% 00:09:50.833 lat (msec) : 100=14.15% 00:09:50.833 cpu : usr=1.90%, sys=6.79%, ctx=303, majf=0, minf=7 00:09:50.833 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:09:50.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:50.833 issued rwts: total=1810,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.833 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:50.833 job1: (groupid=0, jobs=1): err= 0: pid=78382: Tue Nov 19 01:51:01 2024 00:09:50.833 read: IOPS=3699, BW=14.5MiB/s (15.2MB/s)(14.5MiB/1003msec) 00:09:50.833 slat (usec): min=6, max=19606, avg=137.53, stdev=866.22 00:09:50.833 clat (usec): min=746, max=37428, avg=18617.08, stdev=4472.51 00:09:50.833 lat (usec): min=4564, max=37464, avg=18754.61, stdev=4520.15 00:09:50.833 clat percentiles (usec): 00:09:50.833 | 1.00th=[10552], 5.00th=[13960], 10.00th=[14877], 20.00th=[15270], 00:09:50.833 | 30.00th=[15533], 40.00th=[15795], 50.00th=[16450], 60.00th=[19530], 00:09:50.833 | 70.00th=[21627], 80.00th=[22676], 90.00th=[23200], 95.00th=[28181], 00:09:50.833 | 99.00th=[31065], 99.50th=[32900], 99.90th=[35914], 99.95th=[35914], 00:09:50.833 | 99.99th=[37487] 00:09:50.833 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:09:50.833 slat (usec): min=5, max=13261, avg=112.25, stdev=628.54 00:09:50.833 clat (usec): min=6215, max=35607, avg=14157.77, stdev=3836.50 
00:09:50.833 lat (usec): min=6326, max=35618, avg=14270.02, stdev=3816.14 00:09:50.833 clat percentiles (usec): 00:09:50.833 | 1.00th=[ 7898], 5.00th=[10552], 10.00th=[10814], 20.00th=[11600], 00:09:50.833 | 30.00th=[11731], 40.00th=[12387], 50.00th=[12911], 60.00th=[13435], 00:09:50.833 | 70.00th=[14222], 80.00th=[16909], 90.00th=[21890], 95.00th=[23200], 00:09:50.833 | 99.00th=[23987], 99.50th=[25035], 99.90th=[26084], 99.95th=[29230], 00:09:50.833 | 99.99th=[35390] 00:09:50.833 bw ( KiB/s): min=14861, max=17928, per=25.09%, avg=16394.50, stdev=2168.70, samples=2 00:09:50.833 iops : min= 3715, max= 4482, avg=4098.50, stdev=542.35, samples=2 00:09:50.833 lat (usec) : 750=0.01% 00:09:50.833 lat (msec) : 10=1.54%, 20=73.72%, 50=24.73% 00:09:50.833 cpu : usr=2.89%, sys=11.78%, ctx=227, majf=0, minf=3 00:09:50.833 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:50.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:50.833 issued rwts: total=3711,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.833 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:50.833 job2: (groupid=0, jobs=1): err= 0: pid=78383: Tue Nov 19 01:51:01 2024 00:09:50.833 read: IOPS=4732, BW=18.5MiB/s (19.4MB/s)(18.5MiB/1002msec) 00:09:50.833 slat (usec): min=4, max=5196, avg=102.06, stdev=464.41 00:09:50.833 clat (usec): min=705, max=18773, avg=13271.42, stdev=1514.97 00:09:50.833 lat (usec): min=1747, max=18781, avg=13373.48, stdev=1522.75 00:09:50.833 clat percentiles (usec): 00:09:50.833 | 1.00th=[ 6521], 5.00th=[10945], 10.00th=[11731], 20.00th=[12780], 00:09:50.833 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13435], 60.00th=[13566], 00:09:50.833 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14484], 95.00th=[15533], 00:09:50.833 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18744], 99.95th=[18744], 00:09:50.833 | 99.99th=[18744] 00:09:50.833 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:09:50.833 slat (usec): min=12, max=5199, avg=93.39, stdev=526.10 00:09:50.833 clat (usec): min=6793, max=18263, avg=12446.28, stdev=1276.11 00:09:50.833 lat (usec): min=6840, max=18319, avg=12539.67, stdev=1365.22 00:09:50.833 clat percentiles (usec): 00:09:50.833 | 1.00th=[ 8586], 5.00th=[10421], 10.00th=[11338], 20.00th=[11731], 00:09:50.833 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12518], 60.00th=[12780], 00:09:50.833 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13304], 95.00th=[14091], 00:09:50.833 | 99.00th=[16909], 99.50th=[17433], 99.90th=[17957], 99.95th=[18220], 00:09:50.833 | 99.99th=[18220] 00:09:50.833 bw ( KiB/s): min=20480, max=20521, per=31.38%, avg=20500.50, stdev=28.99, samples=2 00:09:50.833 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:09:50.833 lat (usec) : 750=0.01% 00:09:50.833 lat (msec) : 2=0.07%, 10=3.08%, 20=96.84% 00:09:50.833 cpu : usr=4.40%, sys=13.49%, ctx=351, majf=0, minf=3 00:09:50.833 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:50.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:50.834 issued rwts: total=4742,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.834 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:50.834 job3: (groupid=0, jobs=1): err= 0: pid=78384: Tue Nov 19 01:51:01 2024 00:09:50.834 read: IOPS=4603, 
BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:09:50.834 slat (usec): min=3, max=11837, avg=100.95, stdev=655.48 00:09:50.834 clat (usec): min=3227, max=24856, avg=13824.80, stdev=2092.52 00:09:50.834 lat (usec): min=3236, max=25415, avg=13925.75, stdev=2117.71 00:09:50.834 clat percentiles (usec): 00:09:50.834 | 1.00th=[ 8455], 5.00th=[11863], 10.00th=[12780], 20.00th=[13173], 00:09:50.834 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13566], 60.00th=[13698], 00:09:50.834 | 70.00th=[13960], 80.00th=[14222], 90.00th=[14484], 95.00th=[18220], 00:09:50.834 | 99.00th=[22676], 99.50th=[23462], 99.90th=[24773], 99.95th=[24773], 00:09:50.834 | 99.99th=[24773] 00:09:50.834 write: IOPS=5112, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:09:50.834 slat (usec): min=5, max=8873, avg=97.52, stdev=568.03 00:09:50.834 clat (usec): min=617, max=24834, avg=12339.78, stdev=1713.07 00:09:50.834 lat (usec): min=2992, max=24841, avg=12437.30, stdev=1642.36 00:09:50.834 clat percentiles (usec): 00:09:50.834 | 1.00th=[ 5407], 5.00th=[ 9503], 10.00th=[11076], 20.00th=[11600], 00:09:50.834 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12518], 60.00th=[12780], 00:09:50.834 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13698], 95.00th=[13960], 00:09:50.834 | 99.00th=[16909], 99.50th=[17433], 99.90th=[17695], 99.95th=[17695], 00:09:50.834 | 99.99th=[24773] 00:09:50.834 bw ( KiB/s): min=19448, max=20521, per=30.59%, avg=19984.50, stdev=758.73, samples=2 00:09:50.834 iops : min= 4862, max= 5130, avg=4996.00, stdev=189.50, samples=2 00:09:50.834 lat (usec) : 750=0.01% 00:09:50.834 lat (msec) : 4=0.26%, 10=4.39%, 20=93.69%, 50=1.66% 00:09:50.834 cpu : usr=3.70%, sys=13.20%, ctx=250, majf=0, minf=3 00:09:50.834 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:50.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:50.834 issued rwts: total=4608,5118,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.834 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:50.834 00:09:50.834 Run status group 0 (all jobs): 00:09:50.834 READ: bw=57.9MiB/s (60.7MB/s), 7218KiB/s-18.5MiB/s (7392kB/s-19.4MB/s), io=58.1MiB (60.9MB), run=1001-1003msec 00:09:50.834 WRITE: bw=63.8MiB/s (66.9MB/s), 8167KiB/s-20.0MiB/s (8364kB/s-20.9MB/s), io=64.0MiB (67.1MB), run=1001-1003msec 00:09:50.834 00:09:50.834 Disk stats (read/write): 00:09:50.834 nvme0n1: ios=1586/1863, merge=0/0, ticks=13036/21659, in_queue=34695, util=88.18% 00:09:50.834 nvme0n2: ios=3121/3455, merge=0/0, ticks=56741/46643, in_queue=103384, util=88.69% 00:09:50.834 nvme0n3: ios=4102/4436, merge=0/0, ticks=26478/22937, in_queue=49415, util=89.40% 00:09:50.834 nvme0n4: ios=4096/4223, merge=0/0, ticks=53162/48700, in_queue=101862, util=89.44% 00:09:50.834 01:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:50.834 01:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=78397 00:09:50.834 01:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:50.834 01:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:50.834 [global] 00:09:50.834 thread=1 00:09:50.834 invalidate=1 00:09:50.834 rw=read 00:09:50.834 time_based=1 00:09:50.834 runtime=10 00:09:50.834 ioengine=libaio 00:09:50.834 direct=1 00:09:50.834 bs=4096 00:09:50.834 iodepth=1 
00:09:50.834 norandommap=1 00:09:50.834 numjobs=1 00:09:50.834 00:09:50.834 [job0] 00:09:50.834 filename=/dev/nvme0n1 00:09:50.834 [job1] 00:09:50.834 filename=/dev/nvme0n2 00:09:50.834 [job2] 00:09:50.834 filename=/dev/nvme0n3 00:09:50.834 [job3] 00:09:50.834 filename=/dev/nvme0n4 00:09:50.834 Could not set queue depth (nvme0n1) 00:09:50.834 Could not set queue depth (nvme0n2) 00:09:50.834 Could not set queue depth (nvme0n3) 00:09:50.834 Could not set queue depth (nvme0n4) 00:09:50.834 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:50.834 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:50.834 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:50.834 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:50.834 fio-3.35 00:09:50.834 Starting 4 threads 00:09:54.120 01:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:54.120 fio: pid=78450, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:54.120 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=45445120, buflen=4096 00:09:54.120 01:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:54.120 fio: pid=78449, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:54.120 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=70819840, buflen=4096 00:09:54.120 01:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:54.120 01:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:54.378 fio: pid=78447, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:54.378 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=56418304, buflen=4096 00:09:54.378 01:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:54.378 01:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:54.638 fio: pid=78448, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:54.638 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=17973248, buflen=4096 00:09:54.638 00:09:54.638 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=78447: Tue Nov 19 01:51:05 2024 00:09:54.638 read: IOPS=3991, BW=15.6MiB/s (16.3MB/s)(53.8MiB/3451msec) 00:09:54.638 slat (usec): min=8, max=9783, avg=14.75, stdev=140.91 00:09:54.638 clat (usec): min=125, max=3431, avg=234.51, stdev=54.24 00:09:54.638 lat (usec): min=137, max=10013, avg=249.27, stdev=150.47 00:09:54.638 clat percentiles (usec): 00:09:54.638 | 1.00th=[ 137], 5.00th=[ 149], 10.00th=[ 159], 20.00th=[ 223], 00:09:54.638 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 251], 00:09:54.638 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 269], 95.00th=[ 277], 00:09:54.638 | 99.00th=[ 293], 99.50th=[ 
297], 99.90th=[ 570], 99.95th=[ 824], 00:09:54.638 | 99.99th=[ 1598] 00:09:54.638 bw ( KiB/s): min=14744, max=15240, per=22.44%, avg=15113.50, stdev=187.29, samples=6 00:09:54.638 iops : min= 3686, max= 3810, avg=3778.33, stdev=46.81, samples=6 00:09:54.638 lat (usec) : 250=59.09%, 500=40.77%, 750=0.08%, 1000=0.02% 00:09:54.638 lat (msec) : 2=0.03%, 4=0.01% 00:09:54.638 cpu : usr=1.33%, sys=4.64%, ctx=13780, majf=0, minf=1 00:09:54.638 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.638 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.638 issued rwts: total=13775,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.638 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.638 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=78448: Tue Nov 19 01:51:05 2024 00:09:54.638 read: IOPS=5557, BW=21.7MiB/s (22.8MB/s)(81.1MiB/3738msec) 00:09:54.638 slat (usec): min=10, max=11836, avg=16.25, stdev=169.58 00:09:54.638 clat (usec): min=126, max=6213, avg=162.33, stdev=60.33 00:09:54.638 lat (usec): min=138, max=12024, avg=178.58, stdev=180.66 00:09:54.638 clat percentiles (usec): 00:09:54.638 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:09:54.638 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 161], 00:09:54.638 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 182], 00:09:54.638 | 99.00th=[ 202], 99.50th=[ 269], 99.90th=[ 717], 99.95th=[ 1090], 00:09:54.638 | 99.99th=[ 2057] 00:09:54.638 bw ( KiB/s): min=20807, max=22848, per=33.04%, avg=22250.43, stdev=745.16, samples=7 00:09:54.638 iops : min= 5201, max= 5712, avg=5562.43, stdev=186.49, samples=7 00:09:54.638 lat (usec) : 250=99.39%, 500=0.45%, 750=0.06%, 1000=0.04% 00:09:54.638 lat (msec) : 2=0.04%, 4=0.01%, 10=0.01% 00:09:54.638 cpu : usr=1.66%, sys=6.37%, ctx=20787, majf=0, minf=1 00:09:54.638 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.638 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.638 issued rwts: total=20773,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.638 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.638 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=78449: Tue Nov 19 01:51:05 2024 00:09:54.638 read: IOPS=5406, BW=21.1MiB/s (22.1MB/s)(67.5MiB/3198msec) 00:09:54.638 slat (usec): min=11, max=7777, avg=14.04, stdev=82.13 00:09:54.638 clat (usec): min=139, max=2088, avg=169.50, stdev=27.86 00:09:54.638 lat (usec): min=150, max=7955, avg=183.53, stdev=86.91 00:09:54.638 clat percentiles (usec): 00:09:54.638 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:09:54.638 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:09:54.638 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 194], 00:09:54.638 | 99.00th=[ 206], 99.50th=[ 212], 99.90th=[ 334], 99.95th=[ 523], 00:09:54.638 | 99.99th=[ 1876] 00:09:54.638 bw ( KiB/s): min=21000, max=22216, per=32.14%, avg=21647.33, stdev=416.30, samples=6 00:09:54.638 iops : min= 5250, max= 5554, avg=5411.83, stdev=104.07, samples=6 00:09:54.638 lat (usec) : 250=99.88%, 500=0.06%, 750=0.02%, 1000=0.01% 00:09:54.638 lat (msec) : 2=0.01%, 4=0.01% 00:09:54.638 cpu : usr=1.66%, 
sys=6.47%, ctx=17293, majf=0, minf=2 00:09:54.638 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.638 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.638 issued rwts: total=17291,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.638 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.638 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=78450: Tue Nov 19 01:51:05 2024 00:09:54.638 read: IOPS=3770, BW=14.7MiB/s (15.4MB/s)(43.3MiB/2943msec) 00:09:54.638 slat (nsec): min=8173, max=63728, avg=11691.78, stdev=2784.83 00:09:54.638 clat (usec): min=210, max=1651, avg=252.37, stdev=22.99 00:09:54.638 lat (usec): min=220, max=1666, avg=264.07, stdev=23.04 00:09:54.638 clat percentiles (usec): 00:09:54.638 | 1.00th=[ 223], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 239], 00:09:54.638 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 255], 00:09:54.638 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 281], 00:09:54.638 | 99.00th=[ 293], 99.50th=[ 302], 99.90th=[ 334], 99.95th=[ 506], 00:09:54.638 | 99.99th=[ 1029] 00:09:54.638 bw ( KiB/s): min=14744, max=15248, per=22.41%, avg=15093.00, stdev=201.97, samples=5 00:09:54.638 iops : min= 3686, max= 3812, avg=3773.20, stdev=50.47, samples=5 00:09:54.638 lat (usec) : 250=47.92%, 500=52.02%, 750=0.03%, 1000=0.01% 00:09:54.638 lat (msec) : 2=0.02% 00:09:54.638 cpu : usr=1.19%, sys=4.11%, ctx=11096, majf=0, minf=2 00:09:54.638 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.638 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.638 issued rwts: total=11096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.638 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.638 00:09:54.638 Run status group 0 (all jobs): 00:09:54.638 READ: bw=65.8MiB/s (69.0MB/s), 14.7MiB/s-21.7MiB/s (15.4MB/s-22.8MB/s), io=246MiB (258MB), run=2943-3738msec 00:09:54.638 00:09:54.638 Disk stats (read/write): 00:09:54.638 nvme0n1: ios=13307/0, merge=0/0, ticks=3068/0, in_queue=3068, util=95.57% 00:09:54.638 nvme0n2: ios=20068/0, merge=0/0, ticks=3309/0, in_queue=3309, util=95.37% 00:09:54.639 nvme0n3: ios=16859/0, merge=0/0, ticks=2894/0, in_queue=2894, util=96.40% 00:09:54.639 nvme0n4: ios=10817/0, merge=0/0, ticks=2611/0, in_queue=2611, util=96.79% 00:09:54.639 01:51:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:54.639 01:51:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:54.897 01:51:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:54.897 01:51:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:55.156 01:51:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:55.156 01:51:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete 
Malloc4 00:09:55.416 01:51:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:55.416 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:55.676 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:55.676 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:55.935 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:55.935 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 78397 00:09:55.935 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:55.935 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:55.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.935 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:55.935 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:55.935 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:55.935 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:55.935 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:55.935 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:56.194 nvmf hotplug test: fio failed as expected 00:09:56.194 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:56.194 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:56.194 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:56.195 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:56.454 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:56.454 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:56.454 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:56.454 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:56.454 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:56.454 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:56.454 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:56.454 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:56.454 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:56.454 01:51:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:56.454 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:56.454 rmmod nvme_tcp 00:09:56.454 rmmod nvme_fabrics 00:09:56.454 rmmod nvme_keyring 00:09:56.454 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:56.454 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:56.454 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:56.454 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 78023 ']' 00:09:56.454 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 78023 00:09:56.454 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 78023 ']' 00:09:56.454 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 78023 00:09:56.454 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:56.454 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.454 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78023 00:09:56.454 killing process with pid 78023 00:09:56.454 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:56.454 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:56.454 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78023' 00:09:56.454 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 78023 00:09:56.454 01:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 78023 00:09:56.713 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:56.713 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:56.713 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:56.713 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:56.713 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:56.713 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:56.713 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:56.713 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:56.713 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:56.713 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:56.713 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:56.714 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:56.714 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # 
ip link set nvmf_tgt_br2 nomaster 00:09:56.714 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:56.714 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:56.714 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:56.714 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:56.714 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:56.714 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:56.714 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:56.714 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:56.714 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:56.714 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:56.714 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.714 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.714 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.973 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:09:56.973 ************************************ 00:09:56.973 END TEST nvmf_fio_target 00:09:56.973 ************************************ 00:09:56.973 00:09:56.973 real 0m19.235s 00:09:56.973 user 1m11.458s 00:09:56.973 sys 0m10.589s 00:09:56.973 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.973 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.973 01:51:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:56.973 01:51:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:56.973 01:51:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.973 01:51:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:56.973 ************************************ 00:09:56.973 START TEST nvmf_bdevio 00:09:56.973 ************************************ 00:09:56.973 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:56.973 * Looking for test storage... 
00:09:56.973 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:56.973 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:56.973 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:09:56.973 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:57.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.233 --rc genhtml_branch_coverage=1 00:09:57.233 --rc genhtml_function_coverage=1 00:09:57.233 --rc genhtml_legend=1 00:09:57.233 --rc geninfo_all_blocks=1 00:09:57.233 --rc geninfo_unexecuted_blocks=1 00:09:57.233 00:09:57.233 ' 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:57.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.233 --rc genhtml_branch_coverage=1 00:09:57.233 --rc genhtml_function_coverage=1 00:09:57.233 --rc genhtml_legend=1 00:09:57.233 --rc geninfo_all_blocks=1 00:09:57.233 --rc geninfo_unexecuted_blocks=1 00:09:57.233 00:09:57.233 ' 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:57.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.233 --rc genhtml_branch_coverage=1 00:09:57.233 --rc genhtml_function_coverage=1 00:09:57.233 --rc genhtml_legend=1 00:09:57.233 --rc geninfo_all_blocks=1 00:09:57.233 --rc geninfo_unexecuted_blocks=1 00:09:57.233 00:09:57.233 ' 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:57.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.233 --rc genhtml_branch_coverage=1 00:09:57.233 --rc genhtml_function_coverage=1 00:09:57.233 --rc genhtml_legend=1 00:09:57.233 --rc geninfo_all_blocks=1 00:09:57.233 --rc geninfo_unexecuted_blocks=1 00:09:57.233 00:09:57.233 ' 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.233 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:57.234 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
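The nvmftestinit call above expands into the teardown-then-setup sequence that follows: the "Cannot find device" messages further down are the expected no-op teardown on a fresh host, after which nvmf_veth_init builds the virtual test network (veth pairs bridged together, with the target ends moved into the nvmf_tgt_ns_spdk namespace). A condensed sketch of that topology, kept to one initiator and one target interface and using only names and addresses that appear in this log (common.sh also creates the *_if2 second pair and iptables rules, omitted here; this is not the exact helper code):

  # sketch of nvmf_veth_init's topology (abridged)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge joins the two pairs
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # the host's 10.0.0.1 can now reach the namespaced target at 10.0.0.3,
  # which is why the pings below succeed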
00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:57.234 Cannot find device "nvmf_init_br" 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:57.234 Cannot find device "nvmf_init_br2" 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:57.234 Cannot find device "nvmf_tgt_br" 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:57.234 Cannot find device "nvmf_tgt_br2" 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:57.234 Cannot find device "nvmf_init_br" 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:57.234 Cannot find device "nvmf_init_br2" 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:57.234 Cannot find device "nvmf_tgt_br" 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:57.234 Cannot find device "nvmf_tgt_br2" 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:57.234 Cannot find device "nvmf_br" 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:57.234 Cannot find device "nvmf_init_if" 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:57.234 Cannot find device "nvmf_init_if2" 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:57.234 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:57.234 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.234 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:09:57.235 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:57.235 
01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:57.235 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:57.235 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:57.235 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:57.494 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:57.494 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:57.494 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:57.494 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:57.494 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:57.494 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:57.494 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:57.494 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:57.494 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:57.494 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:57.494 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:57.494 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:57.494 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:57.494 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:57.494 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:57.494 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:57.494 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:57.494 01:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:57.494 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:57.494 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:57.494 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:57.494 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:57.494 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:57.494 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:57.494 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:57.494 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:57.494 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:57.494 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:57.494 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:57.494 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:09:57.494 00:09:57.494 --- 10.0.0.3 ping statistics --- 00:09:57.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.494 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:57.494 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:57.494 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:57.494 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:09:57.494 00:09:57.494 --- 10.0.0.4 ping statistics --- 00:09:57.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.494 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:09:57.494 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:57.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:57.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:09:57.494 00:09:57.494 --- 10.0.0.1 ping statistics --- 00:09:57.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.494 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:09:57.494 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:57.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:57.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:09:57.494 00:09:57.494 --- 10.0.0.2 ping statistics --- 00:09:57.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.494 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:57.494 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.494 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:09:57.494 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:57.494 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.494 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:57.494 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:57.494 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.494 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:57.494 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:57.494 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:57.494 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:57.494 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:57.495 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.495 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=78766 00:09:57.495 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 78766 00:09:57.495 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:57.495 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 78766 ']' 00:09:57.495 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.495 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.495 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.495 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.495 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.753 [2024-11-19 01:51:08.156532] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:09:57.753 [2024-11-19 01:51:08.156628] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.753 [2024-11-19 01:51:08.310569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:57.753 [2024-11-19 01:51:08.336404] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:57.753 [2024-11-19 01:51:08.336957] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:57.753 [2024-11-19 01:51:08.337486] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:57.753 [2024-11-19 01:51:08.338141] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:57.753 [2024-11-19 01:51:08.338166] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:57.753 [2024-11-19 01:51:08.339109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:57.753 [2024-11-19 01:51:08.339626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:57.753 [2024-11-19 01:51:08.339763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:57.753 [2024-11-19 01:51:08.339772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:58.012 [2024-11-19 01:51:08.373604] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:58.012 [2024-11-19 01:51:08.469071] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:58.012 Malloc0 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:58.012 [2024-11-19 01:51:08.539639] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:58.012 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:58.012 { 00:09:58.012 "params": { 00:09:58.012 "name": "Nvme$subsystem", 00:09:58.012 "trtype": "$TEST_TRANSPORT", 00:09:58.013 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:58.013 "adrfam": "ipv4", 00:09:58.013 "trsvcid": "$NVMF_PORT", 00:09:58.013 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:58.013 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:58.013 "hdgst": ${hdgst:-false}, 00:09:58.013 "ddgst": ${ddgst:-false} 00:09:58.013 }, 00:09:58.013 "method": "bdev_nvme_attach_controller" 00:09:58.013 } 00:09:58.013 EOF 00:09:58.013 )") 00:09:58.013 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:58.013 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
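The heredoc assembled above is the bdev_nvme_attach_controller fragment printed just below; gen_nvmf_target_json pipes it through jq and bdevio reads the result on fd 62 via --json /dev/fd/62. Reconstructing the full document from the helper's layout (the outer "subsystems" wrapper is SPDK's standard application config shape and is inferred, not printed verbatim in this log), the file bdevio consumes looks roughly like:

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.3",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false, "ddgst": false
            }
          }
        ]
      }
    ]
  }

Saved to a file, the standalone equivalent of the run below would be bdevio --json <that file>.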
00:09:58.013 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:58.013 01:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:58.013 "params": { 00:09:58.013 "name": "Nvme1", 00:09:58.013 "trtype": "tcp", 00:09:58.013 "traddr": "10.0.0.3", 00:09:58.013 "adrfam": "ipv4", 00:09:58.013 "trsvcid": "4420", 00:09:58.013 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:58.013 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:58.013 "hdgst": false, 00:09:58.013 "ddgst": false 00:09:58.013 }, 00:09:58.013 "method": "bdev_nvme_attach_controller" 00:09:58.013 }' 00:09:58.013 [2024-11-19 01:51:08.603843] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:09:58.013 [2024-11-19 01:51:08.604251] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78795 ] 00:09:58.270 [2024-11-19 01:51:08.755218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:58.270 [2024-11-19 01:51:08.779068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.270 [2024-11-19 01:51:08.779200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:58.270 [2024-11-19 01:51:08.779205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.270 [2024-11-19 01:51:08.817061] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:58.529 I/O targets: 00:09:58.529 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:58.529 00:09:58.529 00:09:58.529 CUnit - A unit testing framework for C - Version 2.1-3 00:09:58.529 http://cunit.sourceforge.net/ 00:09:58.529 00:09:58.529 00:09:58.529 Suite: bdevio tests on: Nvme1n1 00:09:58.529 Test: blockdev write read block ...passed 00:09:58.529 Test: blockdev write zeroes read block ...passed 00:09:58.529 Test: blockdev write zeroes read no split ...passed 00:09:58.529 Test: blockdev write zeroes read split ...passed 00:09:58.529 Test: blockdev write zeroes read split partial ...passed 00:09:58.529 Test: blockdev reset ...[2024-11-19 01:51:08.944296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:58.529 [2024-11-19 01:51:08.944405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x590d50 (9): Bad file descriptor 00:09:58.529 [2024-11-19 01:51:08.961810] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. passed 00:09:58.529 Test: blockdev write read 8 blocks ...
00:09:58.529 passed 00:09:58.529 Test: blockdev write read size > 128k ...passed 00:09:58.529 Test: blockdev write read invalid size ...passed 00:09:58.529 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:58.529 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:58.529 Test: blockdev write read max offset ...passed 00:09:58.529 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:58.529 Test: blockdev writev readv 8 blocks ...passed 00:09:58.529 Test: blockdev writev readv 30 x 1block ...passed 00:09:58.529 Test: blockdev writev readv block ...passed 00:09:58.529 Test: blockdev writev readv size > 128k ...passed 00:09:58.529 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:58.529 Test: blockdev comparev and writev ...[2024-11-19 01:51:08.970833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.529 [2024-11-19 01:51:08.971036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:58.529 [2024-11-19 01:51:08.971071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.529 [2024-11-19 01:51:08.971085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:58.529 [2024-11-19 01:51:08.971428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.529 [2024-11-19 01:51:08.971449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:58.529 [2024-11-19 01:51:08.971470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.529 [2024-11-19 01:51:08.971483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:58.529 [2024-11-19 01:51:08.971815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.529 [2024-11-19 01:51:08.971837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:58.529 [2024-11-19 01:51:08.971857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.529 [2024-11-19 01:51:08.971870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:58.529 [2024-11-19 01:51:08.972255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.529 [2024-11-19 01:51:08.972292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:58.529 [2024-11-19 01:51:08.972314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.529 [2024-11-19 01:51:08.972326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
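[annotation] Those comparev-and-writev completions are the expected ones: the test issues fused pairs with mismatching data, so each COMPARE completes with media-error status (02/85), COMPARE FAILURE, and its fused WRITE is aborted with generic status (00/09), ABORTED - FAILED FUSED; the pass markers for this and the passthru cases follow below. A small hedged helper for reading the "(SCT/SC)" pairs in these dumps — only statuses this log actually names are mapped:

```bash
# Sketch: decode the hex "(SCT/SC)" status pairs that spdk_nvme_print_completion
# emits above. SCT = status code type, SC = status code (NVMe completion fields).
decode_nvme_status() {
	case "$1/$2" in
		00/00) echo "SUCCESS" ;;
		00/01) echo "INVALID OPCODE" ;;          # seen on the passthru FABRIC CONNECTs
		00/09) echo "ABORTED - FAILED FUSED" ;;  # write half of a failed fused pair
		02/85) echo "COMPARE FAILURE" ;;         # compare half saw mismatching data
		*)     echo "SCT=0x$1 SC=0x$2" ;;        # anything else: show the raw fields
	esac
}
decode_nvme_status 02 85   # -> COMPARE FAILURE
```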
00:09:58.529 passed 00:09:58.529 Test: blockdev nvme passthru rw ...passed 00:09:58.529 Test: blockdev nvme passthru vendor specific ...[2024-11-19 01:51:08.973282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:58.529 [2024-11-19 01:51:08.973315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:58.529 passed 00:09:58.529 Test: blockdev nvme admin passthru ...[2024-11-19 01:51:08.973473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:58.529 [2024-11-19 01:51:08.973514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:58.529 [2024-11-19 01:51:08.973649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:58.529 [2024-11-19 01:51:08.973682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:58.529 [2024-11-19 01:51:08.973805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:58.529 [2024-11-19 01:51:08.973824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:58.529 passed 00:09:58.529 Test: blockdev copy ...passed 00:09:58.529 00:09:58.529 Run Summary: Type Total Ran Passed Failed Inactive 00:09:58.529 suites 1 1 n/a 0 0 00:09:58.529 tests 23 23 23 0 0 00:09:58.529 asserts 152 152 152 0 n/a 00:09:58.529 00:09:58.529 Elapsed time = 0.144 seconds 00:09:58.529 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:58.529 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.529 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:58.529 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.529 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:58.529 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:58.529 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:58.529 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:58.789 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:58.789 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:58.789 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:58.789 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:58.789 rmmod nvme_tcp 00:09:58.789 rmmod nvme_fabrics 00:09:58.789 rmmod nvme_keyring 00:09:58.789 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:58.789 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:58.789 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:58.789 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@517 -- # '[' -n 78766 ']' 00:09:58.789 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 78766 00:09:58.789 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 78766 ']' 00:09:58.789 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 78766 00:09:58.789 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:58.789 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.789 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78766 00:09:58.789 killing process with pid 78766 00:09:58.789 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:58.789 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:58.789 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78766' 00:09:58.789 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 78766 00:09:58.789 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 78766 00:09:58.789 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:58.789 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:58.789 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:58.789 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:58.789 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:58.789 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:58.789 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:58.789 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:58.789 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:58.789 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:59.049 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:59.049 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:59.049 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:59.049 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:59.049 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:59.049 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:59.049 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:59.049 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:59.049 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:59.049 01:51:09 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:59.049 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:59.049 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:59.049 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:59.049 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.049 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.049 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.049 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:09:59.049 00:09:59.049 real 0m2.231s 00:09:59.049 user 0m5.327s 00:09:59.049 sys 0m0.773s 00:09:59.049 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.049 ************************************ 00:09:59.049 END TEST nvmf_bdevio 00:09:59.049 01:51:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:59.049 ************************************ 00:09:59.309 01:51:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:59.309 ************************************ 00:09:59.309 END TEST nvmf_target_core 00:09:59.309 ************************************ 00:09:59.309 00:09:59.309 real 2m29.235s 00:09:59.309 user 6m27.404s 00:09:59.309 sys 0m53.398s 00:09:59.309 01:51:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.309 01:51:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:59.309 01:51:09 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:59.309 01:51:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:59.309 01:51:09 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.309 01:51:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:59.309 ************************************ 00:09:59.309 START TEST nvmf_target_extra 00:09:59.309 ************************************ 00:09:59.309 01:51:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:59.309 * Looking for test storage... 
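[annotation] Before the next suite gets going (its storage probe resumes just below), note how nvmftestfini above unwound the veth topology: SPDK-tagged iptables rules are filtered out via iptables-save | grep -v SPDK_NVMF | iptables-restore, every port is detached from nvmf_br, then the bridge, the veth pairs, and the target namespace are removed. A condensed standalone sketch; the closing ip netns delete is an assumption about what _remove_spdk_ns boils down to, since its body is xtrace-disabled above:

```bash
#!/usr/bin/env bash
# Sketch of the nvmf_veth_fini teardown traced above.
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK's rules

for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
	ip link set "$port" nomaster   # unplug the port from nvmf_br
	ip link set "$port" down
done
ip link delete nvmf_br type bridge

# Deleting one end of a veth pair removes its peer as well.
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2

ip netns delete nvmf_tgt_ns_spdk   # assumed final step behind _remove_spdk_ns
```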
00:09:59.309 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:59.309 01:51:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:59.309 01:51:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:09:59.309 01:51:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:59.309 01:51:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:59.309 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.309 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.309 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.309 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.309 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.309 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.309 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.309 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.309 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.309 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.309 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.309 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:59.309 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:59.309 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.309 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:59.569 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:59.569 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:59.569 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.569 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:59.569 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.569 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:59.569 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:59.569 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.569 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:59.569 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.569 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.569 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.569 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:59.569 01:51:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.569 01:51:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:59.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.569 --rc genhtml_branch_coverage=1 00:09:59.569 --rc genhtml_function_coverage=1 00:09:59.569 --rc genhtml_legend=1 00:09:59.569 --rc geninfo_all_blocks=1 00:09:59.569 --rc geninfo_unexecuted_blocks=1 00:09:59.569 00:09:59.569 ' 00:09:59.569 01:51:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:59.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.569 --rc genhtml_branch_coverage=1 00:09:59.569 --rc genhtml_function_coverage=1 00:09:59.569 --rc genhtml_legend=1 00:09:59.569 --rc geninfo_all_blocks=1 00:09:59.569 --rc geninfo_unexecuted_blocks=1 00:09:59.569 00:09:59.569 ' 00:09:59.569 01:51:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:59.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.569 --rc genhtml_branch_coverage=1 00:09:59.569 --rc genhtml_function_coverage=1 00:09:59.569 --rc genhtml_legend=1 00:09:59.569 --rc geninfo_all_blocks=1 00:09:59.569 --rc geninfo_unexecuted_blocks=1 00:09:59.569 00:09:59.569 ' 00:09:59.569 01:51:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:59.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.569 --rc genhtml_branch_coverage=1 00:09:59.569 --rc genhtml_function_coverage=1 00:09:59.569 --rc genhtml_legend=1 00:09:59.569 --rc geninfo_all_blocks=1 00:09:59.569 --rc geninfo_unexecuted_blocks=1 00:09:59.569 00:09:59.569 ' 00:09:59.569 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:59.569 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:59.569 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.569 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.569 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.569 01:51:09 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.569 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:59.570 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:59.570 ************************************ 00:09:59.570 START TEST nvmf_auth_target 00:09:59.570 ************************************ 00:09:59.570 01:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:59.570 * Looking for test storage... 
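[annotation] While the auth suite's storage probe continues below, the lcov gate it is about to repeat is the same scripts/common.sh walk nvmf_target_extra just ran: lt 1.15 2 splits each version string on ".", "-", ":" and compares component-wise, so lcov 1.x selects the old --rc lcov_branch_coverage=1 option spellings. A condensed sketch of that cmp_versions walk; treating absent components as 0 is an assumption here, and the real decimal helper validates digits more strictly:

```bash
# Sketch of scripts/common.sh cmp_versions as traced above ("lt 1.15 2").
cmp_versions() {
	local -a ver1 ver2
	local op=$2 v d1 d2
	IFS=.-: read -ra ver1 <<< "$1"
	IFS=.-: read -ra ver2 <<< "$3"
	for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
		d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # assumption: absent parts count as 0
		((d1 > d2)) && [[ $op == '>' ]] && return 0
		((d1 > d2)) && return 1
		((d1 < d2)) && [[ $op == '<' ]] && return 0
		((d1 < d2)) && return 1
	done
	[[ $op == *'='* ]]   # all components equal: only '==' style ops succeed
}
lt() { cmp_versions "$1" '<' "$2"; }
lt 1.15 2 && echo "lcov predates 2.x option names"
```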
00:09:59.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:59.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.570 --rc genhtml_branch_coverage=1 00:09:59.570 --rc genhtml_function_coverage=1 00:09:59.570 --rc genhtml_legend=1 00:09:59.570 --rc geninfo_all_blocks=1 00:09:59.570 --rc geninfo_unexecuted_blocks=1 00:09:59.570 00:09:59.570 ' 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:59.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.570 --rc genhtml_branch_coverage=1 00:09:59.570 --rc genhtml_function_coverage=1 00:09:59.570 --rc genhtml_legend=1 00:09:59.570 --rc geninfo_all_blocks=1 00:09:59.570 --rc geninfo_unexecuted_blocks=1 00:09:59.570 00:09:59.570 ' 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:59.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.570 --rc genhtml_branch_coverage=1 00:09:59.570 --rc genhtml_function_coverage=1 00:09:59.570 --rc genhtml_legend=1 00:09:59.570 --rc geninfo_all_blocks=1 00:09:59.570 --rc geninfo_unexecuted_blocks=1 00:09:59.570 00:09:59.570 ' 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:59.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.570 --rc genhtml_branch_coverage=1 00:09:59.570 --rc genhtml_function_coverage=1 00:09:59.570 --rc genhtml_legend=1 00:09:59.570 --rc geninfo_all_blocks=1 00:09:59.570 --rc geninfo_unexecuted_blocks=1 00:09:59.570 00:09:59.570 ' 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.570 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.571 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.571 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.571 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.571 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.571 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.571 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.571 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.571 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:09:59.571 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:09:59.571 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.571 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.571 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:59.571 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.571 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:59.571 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:59.571 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.830 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.830 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.830 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.830 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.830 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.830 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:59.831 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:59.831 
01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:59.831 Cannot find device "nvmf_init_br" 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:59.831 Cannot find device "nvmf_init_br2" 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:59.831 Cannot find device "nvmf_tgt_br" 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:59.831 Cannot find device "nvmf_tgt_br2" 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:59.831 Cannot find device "nvmf_init_br" 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:59.831 Cannot find device "nvmf_init_br2" 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:59.831 Cannot find device "nvmf_tgt_br" 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:59.831 Cannot find device "nvmf_tgt_br2" 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:59.831 Cannot find device "nvmf_br" 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:59.831 Cannot find device "nvmf_init_if" 00:09:59.831 01:51:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:59.831 Cannot find device "nvmf_init_if2" 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:59.831 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:59.831 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:59.831 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:09:59.832 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:59.832 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:59.832 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:59.832 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:59.832 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:59.832 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:59.832 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:59.832 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:59.832 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:59.832 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:59.832 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:00.091 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:00.091 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:00.091 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:00.091 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:00.091 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:00.091 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:00.091 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:00.091 01:51:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:00.091 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:00.091 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:00.091 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:00.091 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:00.091 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:00.091 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:00.091 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:00.091 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:00.092 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:00.092 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:00.092 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:00.092 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:00.092 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:00.092 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:00.092 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:00.092 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:10:00.092 00:10:00.092 --- 10.0.0.3 ping statistics --- 00:10:00.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.092 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:10:00.092 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:00.092 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:00.092 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:10:00.092 00:10:00.092 --- 10.0.0.4 ping statistics --- 00:10:00.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.092 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:00.092 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:00.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:00.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:10:00.092 00:10:00.092 --- 10.0.0.1 ping statistics --- 00:10:00.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.092 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:10:00.092 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:00.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:00.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:10:00.092 00:10:00.092 --- 10.0.0.2 ping statistics --- 00:10:00.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.092 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:10:00.092 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.092 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:10:00.092 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:00.092 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.092 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:00.092 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:00.092 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.092 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:00.092 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:00.092 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:10:00.092 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:00.092 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:00.092 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.092 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=79078 00:10:00.092 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 79078 00:10:00.092 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 79078 ']' 00:10:00.092 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:00.092 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.092 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.092 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:00.092 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.092 01:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=79110 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=edb221525ed99d4ed77acbd02b0910dfae6430083508d168 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.w1A 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key edb221525ed99d4ed77acbd02b0910dfae6430083508d168 0 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 edb221525ed99d4ed77acbd02b0910dfae6430083508d168 0 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=edb221525ed99d4ed77acbd02b0910dfae6430083508d168 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:01.472 01:51:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.w1A 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.w1A 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.w1A 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b6f6360ccde2f224e456f448b5528fed045ebdebaa7bfc0541d9117256706006 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.1Xo 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b6f6360ccde2f224e456f448b5528fed045ebdebaa7bfc0541d9117256706006 3 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b6f6360ccde2f224e456f448b5528fed045ebdebaa7bfc0541d9117256706006 3 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b6f6360ccde2f224e456f448b5528fed045ebdebaa7bfc0541d9117256706006 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.1Xo 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.1Xo 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.1Xo 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:10:01.472 01:51:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=19dd80a4f177ef792af00d3cfdbe640e 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.XC3 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 19dd80a4f177ef792af00d3cfdbe640e 1 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 19dd80a4f177ef792af00d3cfdbe640e 1 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=19dd80a4f177ef792af00d3cfdbe640e 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.XC3 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.XC3 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.XC3 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b43b300d7c2dcb502e19bfe2c536a6a26e96158d0018a8d7 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.XoE 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b43b300d7c2dcb502e19bfe2c536a6a26e96158d0018a8d7 2 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b43b300d7c2dcb502e19bfe2c536a6a26e96158d0018a8d7 2 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b43b300d7c2dcb502e19bfe2c536a6a26e96158d0018a8d7 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.XoE 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.XoE 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.XoE 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cf6835029b80e461dc5b854b770180af8fff0bab2b102dd2 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.nyF 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cf6835029b80e461dc5b854b770180af8fff0bab2b102dd2 2 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cf6835029b80e461dc5b854b770180af8fff0bab2b102dd2 2 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cf6835029b80e461dc5b854b770180af8fff0bab2b102dd2 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:10:01.472 01:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:01.472 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.nyF 00:10:01.472 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.nyF 00:10:01.472 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.nyF 00:10:01.472 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:10:01.472 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:01.472 01:51:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:01.472 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:01.472 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:10:01.472 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:10:01.472 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:01.472 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f32fb85c177d145943511642957bc896 00:10:01.473 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:10:01.473 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.L9q 00:10:01.473 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f32fb85c177d145943511642957bc896 1 00:10:01.473 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f32fb85c177d145943511642957bc896 1 00:10:01.473 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:01.473 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:01.473 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f32fb85c177d145943511642957bc896 00:10:01.473 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:10:01.473 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:01.732 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.L9q 00:10:01.732 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.L9q 00:10:01.732 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.L9q 00:10:01.732 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:10:01.732 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:01.732 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:01.732 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:01.732 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:10:01.732 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:10:01.732 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:01.732 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=945f2ba10d0a05d3bc1e6398b84fb613ace05d6cf493ac63e324283369829a60 00:10:01.732 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:10:01.732 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.dpn 00:10:01.732 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
945f2ba10d0a05d3bc1e6398b84fb613ace05d6cf493ac63e324283369829a60 3 00:10:01.732 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 945f2ba10d0a05d3bc1e6398b84fb613ace05d6cf493ac63e324283369829a60 3 00:10:01.732 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:01.732 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:01.732 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=945f2ba10d0a05d3bc1e6398b84fb613ace05d6cf493ac63e324283369829a60 00:10:01.732 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:10:01.732 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:01.732 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.dpn 00:10:01.732 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.dpn 00:10:01.732 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.dpn 00:10:01.732 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:10:01.732 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 79078 00:10:01.732 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 79078 ']' 00:10:01.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.732 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.732 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.732 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.732 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.732 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.991 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:01.991 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:10:01.991 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 79110 /var/tmp/host.sock 00:10:01.991 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 79110 ']' 00:10:01.991 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:10:01.991 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:01.991 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
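
Every gen_dhchap_key trace above follows the same recipe: draw len/2 random bytes as a hex string, wrap that string in the DHHC-1 secret representation (digest id 0 through 3 for null/sha256/sha384/sha512, then base64 of the ASCII secret with a CRC-32 appended), and write it 0600 to a tmp file. The python body is elided in the xtrace output, so this re-sketch assumes the CRC-32 is appended little-endian, the way nvme-cli's gen-dhchap-key formats it:

    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    gen_dhchap_key() {   # usage: gen_dhchap_key sha512 64  -> prints the key file path
        local digest=${digests[$1]} len=$2 key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # hex secret, $len characters
        file=$(mktemp -t "spdk.key-$1.XXX")
        # the ASCII hex string itself is the secret; CRC-32 endianness is an assumption
        python3 -c 'import base64,sys,zlib;k=sys.argv[1].encode();print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' "$key" "$digest" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }

Called as keys[0]=$(gen_dhchap_key null 48) and ckeys[0]=$(gen_dhchap_key sha512 64), this yields the DHHC-1:00:... and DHHC-1:03:... secrets that reappear verbatim on the nvme connect command lines later in the log.
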
00:10:01.991 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.991 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.251 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.251 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:10:02.251 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:10:02.251 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.251 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.251 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.251 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:02.251 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.w1A 00:10:02.251 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.251 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.251 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.251 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.w1A 00:10:02.251 01:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.w1A 00:10:02.510 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.1Xo ]] 00:10:02.510 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1Xo 00:10:02.510 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.510 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.510 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.510 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1Xo 00:10:02.510 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1Xo 00:10:02.769 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:02.769 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.XC3 00:10:02.769 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.769 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.028 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.028 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.XC3 00:10:03.028 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.XC3 00:10:03.028 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.XoE ]] 00:10:03.028 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.XoE 00:10:03.028 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.028 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.286 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.286 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.XoE 00:10:03.286 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.XoE 00:10:03.546 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:03.546 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.nyF 00:10:03.546 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.546 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.546 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.546 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.nyF 00:10:03.546 01:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.nyF 00:10:03.806 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.L9q ]] 00:10:03.806 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.L9q 00:10:03.806 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.806 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.806 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.806 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.L9q 00:10:03.806 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.L9q 00:10:04.065 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:04.065 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.dpn 00:10:04.065 01:51:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.065 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.065 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.065 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.dpn 00:10:04.065 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.dpn 00:10:04.345 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:10:04.345 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:04.345 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:04.345 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:04.345 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:04.345 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:04.645 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:10:04.645 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:04.645 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:04.646 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:04.646 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:04.646 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:04.646 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:04.646 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.646 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.646 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.646 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:04.646 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:04.646 01:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:04.919 00:10:04.919 01:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:04.919 01:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:04.919 01:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:05.178 01:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:05.178 01:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:05.178 01:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.178 01:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.178 01:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.178 01:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:05.178 { 00:10:05.178 "cntlid": 1, 00:10:05.178 "qid": 0, 00:10:05.178 "state": "enabled", 00:10:05.178 "thread": "nvmf_tgt_poll_group_000", 00:10:05.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:10:05.178 "listen_address": { 00:10:05.178 "trtype": "TCP", 00:10:05.178 "adrfam": "IPv4", 00:10:05.178 "traddr": "10.0.0.3", 00:10:05.178 "trsvcid": "4420" 00:10:05.178 }, 00:10:05.178 "peer_address": { 00:10:05.178 "trtype": "TCP", 00:10:05.178 "adrfam": "IPv4", 00:10:05.178 "traddr": "10.0.0.1", 00:10:05.178 "trsvcid": "44866" 00:10:05.178 }, 00:10:05.178 "auth": { 00:10:05.178 "state": "completed", 00:10:05.178 "digest": "sha256", 00:10:05.178 "dhgroup": "null" 00:10:05.178 } 00:10:05.178 } 00:10:05.178 ]' 00:10:05.178 01:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:05.178 01:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:05.178 01:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:05.178 01:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:05.178 01:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:05.437 01:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:05.437 01:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:05.438 01:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:05.697 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:10:05.697 01:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:10:09.882 01:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:09.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:09.883 01:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:10:09.883 01:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.883 01:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.883 01:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.883 01:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:09.883 01:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:09.883 01:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:10.142 01:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:10:10.142 01:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:10.142 01:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:10.142 01:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:10.142 01:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:10.142 01:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:10.142 01:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:10.142 01:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.142 01:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.142 01:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.142 01:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:10.142 01:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:10.142 01:51:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:10.401 00:10:10.659 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:10.659 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:10.659 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:10.918 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:10.918 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:10.918 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.918 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.918 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.918 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:10.918 { 00:10:10.918 "cntlid": 3, 00:10:10.918 "qid": 0, 00:10:10.918 "state": "enabled", 00:10:10.918 "thread": "nvmf_tgt_poll_group_000", 00:10:10.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:10:10.918 "listen_address": { 00:10:10.918 "trtype": "TCP", 00:10:10.918 "adrfam": "IPv4", 00:10:10.918 "traddr": "10.0.0.3", 00:10:10.918 "trsvcid": "4420" 00:10:10.918 }, 00:10:10.918 "peer_address": { 00:10:10.918 "trtype": "TCP", 00:10:10.918 "adrfam": "IPv4", 00:10:10.918 "traddr": "10.0.0.1", 00:10:10.918 "trsvcid": "35048" 00:10:10.918 }, 00:10:10.918 "auth": { 00:10:10.918 "state": "completed", 00:10:10.918 "digest": "sha256", 00:10:10.918 "dhgroup": "null" 00:10:10.918 } 00:10:10.918 } 00:10:10.918 ]' 00:10:10.918 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:10.918 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:10.918 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:10.918 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:10.918 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:10.918 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:10.918 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:10.918 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:11.177 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret 
DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:10:11.177 01:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:10:12.112 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:12.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:12.112 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:10:12.112 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.112 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.112 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.112 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:12.112 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:12.113 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:12.371 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:10:12.371 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:12.371 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:12.372 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:12.372 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:12.372 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:12.372 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:12.372 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.372 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.372 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.372 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:12.372 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:12.372 01:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:12.631 00:10:12.631 01:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:12.631 01:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:12.631 01:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:13.197 01:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:13.197 01:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:13.197 01:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.197 01:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.197 01:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.197 01:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:13.197 { 00:10:13.197 "cntlid": 5, 00:10:13.197 "qid": 0, 00:10:13.197 "state": "enabled", 00:10:13.197 "thread": "nvmf_tgt_poll_group_000", 00:10:13.197 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:10:13.197 "listen_address": { 00:10:13.197 "trtype": "TCP", 00:10:13.197 "adrfam": "IPv4", 00:10:13.197 "traddr": "10.0.0.3", 00:10:13.197 "trsvcid": "4420" 00:10:13.197 }, 00:10:13.197 "peer_address": { 00:10:13.197 "trtype": "TCP", 00:10:13.197 "adrfam": "IPv4", 00:10:13.197 "traddr": "10.0.0.1", 00:10:13.197 "trsvcid": "35060" 00:10:13.197 }, 00:10:13.197 "auth": { 00:10:13.197 "state": "completed", 00:10:13.197 "digest": "sha256", 00:10:13.197 "dhgroup": "null" 00:10:13.197 } 00:10:13.197 } 00:10:13.197 ]' 00:10:13.197 01:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:13.197 01:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:13.198 01:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:13.198 01:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:13.198 01:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:13.198 01:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:13.198 01:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:13.198 01:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:13.456 01:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:10:13.456 01:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:10:14.024 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:14.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:14.024 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:10:14.024 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.024 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.024 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.024 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:14.024 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:14.024 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:14.283 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:10:14.283 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:14.283 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:14.283 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:14.283 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:14.283 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:14.283 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key3 00:10:14.283 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.283 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.283 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.283 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:14.283 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:14.283 01:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:14.850 00:10:14.850 01:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:14.850 01:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:14.850 01:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:15.108 01:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:15.108 01:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:15.108 01:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.108 01:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.108 01:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.108 01:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:15.108 { 00:10:15.108 "cntlid": 7, 00:10:15.108 "qid": 0, 00:10:15.108 "state": "enabled", 00:10:15.108 "thread": "nvmf_tgt_poll_group_000", 00:10:15.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:10:15.108 "listen_address": { 00:10:15.108 "trtype": "TCP", 00:10:15.108 "adrfam": "IPv4", 00:10:15.108 "traddr": "10.0.0.3", 00:10:15.108 "trsvcid": "4420" 00:10:15.108 }, 00:10:15.108 "peer_address": { 00:10:15.108 "trtype": "TCP", 00:10:15.108 "adrfam": "IPv4", 00:10:15.108 "traddr": "10.0.0.1", 00:10:15.108 "trsvcid": "35094" 00:10:15.108 }, 00:10:15.108 "auth": { 00:10:15.108 "state": "completed", 00:10:15.108 "digest": "sha256", 00:10:15.108 "dhgroup": "null" 00:10:15.108 } 00:10:15.108 } 00:10:15.108 ]' 00:10:15.108 01:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:15.108 01:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:15.108 01:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:15.108 01:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:15.108 01:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:15.108 01:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:15.108 01:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:15.108 01:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:15.367 01:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:10:15.367 01:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:10:15.934 01:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:16.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:16.194 01:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:10:16.194 01:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.194 01:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.194 01:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.194 01:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:16.194 01:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:16.194 01:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:16.194 01:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:16.453 01:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:10:16.453 01:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:16.453 01:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:16.453 01:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:16.453 01:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:16.453 01:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:16.453 01:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:16.453 01:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.453 01:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.453 01:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.453 01:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:16.453 01:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:16.453 01:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:16.712 00:10:16.712 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:16.712 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:16.712 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:16.970 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:16.970 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:16.971 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.971 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.971 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.971 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:16.971 { 00:10:16.971 "cntlid": 9, 00:10:16.971 "qid": 0, 00:10:16.971 "state": "enabled", 00:10:16.971 "thread": "nvmf_tgt_poll_group_000", 00:10:16.971 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:10:16.971 "listen_address": { 00:10:16.971 "trtype": "TCP", 00:10:16.971 "adrfam": "IPv4", 00:10:16.971 "traddr": "10.0.0.3", 00:10:16.971 "trsvcid": "4420" 00:10:16.971 }, 00:10:16.971 "peer_address": { 00:10:16.971 "trtype": "TCP", 00:10:16.971 "adrfam": "IPv4", 00:10:16.971 "traddr": "10.0.0.1", 00:10:16.971 "trsvcid": "51448" 00:10:16.971 }, 00:10:16.971 "auth": { 00:10:16.971 "state": "completed", 00:10:16.971 "digest": "sha256", 00:10:16.971 "dhgroup": "ffdhe2048" 00:10:16.971 } 00:10:16.971 } 00:10:16.971 ]' 00:10:16.971 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:16.971 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:16.971 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:17.229 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:17.229 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:17.229 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:17.229 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:17.229 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:17.488 
01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:10:17.488 01:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:10:18.055 01:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:18.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:18.055 01:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:10:18.055 01:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.055 01:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.055 01:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.055 01:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:18.055 01:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:18.055 01:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:18.314 01:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:10:18.314 01:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:18.314 01:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:18.314 01:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:18.314 01:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:18.314 01:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:18.314 01:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:18.314 01:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.314 01:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.573 01:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.573 01:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:18.573 01:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:18.573 01:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:18.831 00:10:18.831 01:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:18.831 01:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:18.831 01:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:19.101 01:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:19.101 01:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:19.101 01:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.101 01:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.101 01:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.101 01:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:19.101 { 00:10:19.101 "cntlid": 11, 00:10:19.101 "qid": 0, 00:10:19.101 "state": "enabled", 00:10:19.101 "thread": "nvmf_tgt_poll_group_000", 00:10:19.101 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:10:19.101 "listen_address": { 00:10:19.101 "trtype": "TCP", 00:10:19.101 "adrfam": "IPv4", 00:10:19.101 "traddr": "10.0.0.3", 00:10:19.101 "trsvcid": "4420" 00:10:19.101 }, 00:10:19.101 "peer_address": { 00:10:19.101 "trtype": "TCP", 00:10:19.101 "adrfam": "IPv4", 00:10:19.101 "traddr": "10.0.0.1", 00:10:19.101 "trsvcid": "51472" 00:10:19.101 }, 00:10:19.101 "auth": { 00:10:19.101 "state": "completed", 00:10:19.101 "digest": "sha256", 00:10:19.101 "dhgroup": "ffdhe2048" 00:10:19.101 } 00:10:19.101 } 00:10:19.101 ]' 00:10:19.101 01:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:19.101 01:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:19.101 01:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:19.101 01:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:19.101 01:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:19.389 01:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:19.389 01:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:19.389 
01:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:19.389 01:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:10:19.389 01:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:10:20.332 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:20.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:20.332 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:10:20.332 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.332 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.332 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.332 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:20.332 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:20.332 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:20.332 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:10:20.332 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:20.332 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:20.332 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:20.332 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:20.332 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:20.332 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:20.332 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.332 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.332 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:10:20.332 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:20.332 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:20.332 01:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:20.900 00:10:20.900 01:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:20.900 01:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:20.900 01:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:21.160 01:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:21.160 01:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:21.160 01:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.160 01:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.160 01:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.160 01:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:21.160 { 00:10:21.160 "cntlid": 13, 00:10:21.160 "qid": 0, 00:10:21.160 "state": "enabled", 00:10:21.160 "thread": "nvmf_tgt_poll_group_000", 00:10:21.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:10:21.160 "listen_address": { 00:10:21.160 "trtype": "TCP", 00:10:21.160 "adrfam": "IPv4", 00:10:21.160 "traddr": "10.0.0.3", 00:10:21.160 "trsvcid": "4420" 00:10:21.160 }, 00:10:21.160 "peer_address": { 00:10:21.160 "trtype": "TCP", 00:10:21.160 "adrfam": "IPv4", 00:10:21.160 "traddr": "10.0.0.1", 00:10:21.160 "trsvcid": "51490" 00:10:21.160 }, 00:10:21.160 "auth": { 00:10:21.160 "state": "completed", 00:10:21.160 "digest": "sha256", 00:10:21.160 "dhgroup": "ffdhe2048" 00:10:21.160 } 00:10:21.160 } 00:10:21.160 ]' 00:10:21.160 01:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:21.160 01:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:21.160 01:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:21.160 01:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:21.160 01:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:21.160 01:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:21.160 01:51:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:21.160 01:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:21.728 01:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:10:21.728 01:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:10:22.296 01:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:22.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:22.297 01:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:10:22.297 01:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.297 01:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.297 01:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.297 01:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:22.297 01:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:22.297 01:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:22.556 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:10:22.556 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:22.556 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:22.556 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:22.556 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:22.556 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:22.556 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key3 00:10:22.556 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.556 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
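Each iteration traced here exercises one (digest, DH group, key index) combination end to end: the host daemon is pinned to a single DHCHAP digest/DH-group pair, the target registers the host NQN with the matching key(s), a bdev controller is attached over TCP, the negotiated qpair is checked, and everything is torn down again before the next combination. A condensed sketch of one such cycle follows; it assumes key0/ckey0 were registered as named keys during test setup and that rpc_cmd resolves to the default-socket rpc.py invocation (the commands and flags otherwise mirror the log):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SUBSYS=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89
    # Host side: restrict the initiator to one digest / DH-group combination.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # Target side: allow this host, binding its key (plus a controller key for bidirectional auth).
    $RPC nvmf_subsystem_add_host "$SUBSYS" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Host side: attach a controller over TCP, authenticating with the same keys.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$HOSTNQN" -n "$SUBSYS" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Confirm the qpair finished authentication with the expected digest/dhgroup.
    $RPC nvmf_subsystem_get_qpairs "$SUBSYS" | jq -r '.[0].auth'
    # Tear down before the next combination.
    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    $RPC nvmf_subsystem_remove_host "$SUBSYS" "$HOSTNQN"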
00:10:22.556 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.556 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:22.556 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:22.556 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:22.815 00:10:22.815 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:22.815 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:22.815 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:23.074 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:23.074 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:23.074 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.074 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.074 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.074 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:23.074 { 00:10:23.074 "cntlid": 15, 00:10:23.074 "qid": 0, 00:10:23.074 "state": "enabled", 00:10:23.074 "thread": "nvmf_tgt_poll_group_000", 00:10:23.074 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:10:23.074 "listen_address": { 00:10:23.074 "trtype": "TCP", 00:10:23.074 "adrfam": "IPv4", 00:10:23.074 "traddr": "10.0.0.3", 00:10:23.074 "trsvcid": "4420" 00:10:23.074 }, 00:10:23.074 "peer_address": { 00:10:23.074 "trtype": "TCP", 00:10:23.074 "adrfam": "IPv4", 00:10:23.074 "traddr": "10.0.0.1", 00:10:23.074 "trsvcid": "51524" 00:10:23.074 }, 00:10:23.074 "auth": { 00:10:23.074 "state": "completed", 00:10:23.074 "digest": "sha256", 00:10:23.074 "dhgroup": "ffdhe2048" 00:10:23.074 } 00:10:23.074 } 00:10:23.074 ]' 00:10:23.074 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:23.074 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:23.074 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:23.334 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:23.334 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:23.334 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:23.334 
01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:23.334 01:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:23.594 01:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:10:23.594 01:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:10:24.162 01:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:24.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:24.162 01:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:10:24.162 01:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.162 01:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.162 01:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.162 01:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:24.162 01:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:24.162 01:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:24.162 01:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:24.421 01:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:10:24.421 01:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:24.421 01:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:24.421 01:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:24.421 01:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:24.421 01:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:24.421 01:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:24.421 01:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.421 01:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:10:24.421 01:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.421 01:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:24.421 01:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:24.421 01:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:24.989 00:10:24.989 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:24.989 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:24.989 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:24.989 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:24.989 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:24.989 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.989 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.989 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.989 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:24.989 { 00:10:24.989 "cntlid": 17, 00:10:24.989 "qid": 0, 00:10:24.989 "state": "enabled", 00:10:24.989 "thread": "nvmf_tgt_poll_group_000", 00:10:24.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:10:24.989 "listen_address": { 00:10:24.989 "trtype": "TCP", 00:10:24.989 "adrfam": "IPv4", 00:10:24.989 "traddr": "10.0.0.3", 00:10:24.989 "trsvcid": "4420" 00:10:24.989 }, 00:10:24.989 "peer_address": { 00:10:24.989 "trtype": "TCP", 00:10:24.989 "adrfam": "IPv4", 00:10:24.989 "traddr": "10.0.0.1", 00:10:24.989 "trsvcid": "51548" 00:10:24.989 }, 00:10:24.989 "auth": { 00:10:24.989 "state": "completed", 00:10:24.989 "digest": "sha256", 00:10:24.989 "dhgroup": "ffdhe3072" 00:10:24.989 } 00:10:24.989 } 00:10:24.989 ]' 00:10:24.989 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:25.249 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:25.249 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:25.249 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:25.249 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:25.249 01:51:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:25.249 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:25.249 01:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:25.508 01:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:10:25.508 01:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:10:26.444 01:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:26.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:26.444 01:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:10:26.444 01:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.444 01:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.444 01:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.444 01:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:26.444 01:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:26.444 01:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:26.702 01:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:10:26.702 01:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:26.702 01:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:26.702 01:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:26.702 01:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:26.702 01:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:26.702 01:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:10:26.702 01:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.702 01:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.702 01:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.702 01:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:26.703 01:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:26.703 01:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:26.961 00:10:26.961 01:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:26.961 01:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:26.961 01:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:27.220 01:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:27.220 01:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:27.220 01:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.220 01:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.220 01:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.220 01:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:27.220 { 00:10:27.220 "cntlid": 19, 00:10:27.220 "qid": 0, 00:10:27.220 "state": "enabled", 00:10:27.220 "thread": "nvmf_tgt_poll_group_000", 00:10:27.220 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:10:27.220 "listen_address": { 00:10:27.220 "trtype": "TCP", 00:10:27.220 "adrfam": "IPv4", 00:10:27.220 "traddr": "10.0.0.3", 00:10:27.220 "trsvcid": "4420" 00:10:27.220 }, 00:10:27.220 "peer_address": { 00:10:27.220 "trtype": "TCP", 00:10:27.220 "adrfam": "IPv4", 00:10:27.220 "traddr": "10.0.0.1", 00:10:27.220 "trsvcid": "51480" 00:10:27.220 }, 00:10:27.220 "auth": { 00:10:27.220 "state": "completed", 00:10:27.220 "digest": "sha256", 00:10:27.220 "dhgroup": "ffdhe3072" 00:10:27.220 } 00:10:27.220 } 00:10:27.220 ]' 00:10:27.220 01:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:27.479 01:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:27.479 01:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:27.479 01:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:27.479 01:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:27.479 01:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:27.479 01:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:27.479 01:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:27.738 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:10:27.738 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:10:28.673 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:28.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:28.673 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:10:28.673 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.673 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.673 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.673 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:28.673 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:28.673 01:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:28.673 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:10:28.673 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:28.673 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:28.673 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:28.673 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:28.673 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:28.673 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:28.673 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.673 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.673 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.673 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:28.673 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:28.673 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:29.241 00:10:29.241 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:29.241 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:29.241 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:29.241 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:29.241 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:29.241 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.241 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.241 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.241 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:29.241 { 00:10:29.241 "cntlid": 21, 00:10:29.241 "qid": 0, 00:10:29.241 "state": "enabled", 00:10:29.241 "thread": "nvmf_tgt_poll_group_000", 00:10:29.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:10:29.241 "listen_address": { 00:10:29.241 "trtype": "TCP", 00:10:29.241 "adrfam": "IPv4", 00:10:29.241 "traddr": "10.0.0.3", 00:10:29.241 "trsvcid": "4420" 00:10:29.241 }, 00:10:29.241 "peer_address": { 00:10:29.241 "trtype": "TCP", 00:10:29.241 "adrfam": "IPv4", 00:10:29.241 "traddr": "10.0.0.1", 00:10:29.241 "trsvcid": "51510" 00:10:29.241 }, 00:10:29.241 "auth": { 00:10:29.241 "state": "completed", 00:10:29.241 "digest": "sha256", 00:10:29.241 "dhgroup": "ffdhe3072" 00:10:29.241 } 00:10:29.241 } 00:10:29.241 ]' 00:10:29.241 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:29.499 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:29.499 01:51:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:29.499 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:29.499 01:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:29.499 01:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:29.500 01:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:29.500 01:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:29.758 01:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:10:29.758 01:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:10:30.695 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:30.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:30.695 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:10:30.695 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.695 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.695 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.695 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:30.695 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:30.695 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:30.955 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:10:30.955 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:30.955 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:30.955 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:30.955 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:30.955 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:30.955 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key3 00:10:30.955 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.955 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.955 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.955 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:30.955 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:30.955 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:31.214 00:10:31.214 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:31.214 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:31.214 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:31.473 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:31.473 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:31.473 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.473 01:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.473 01:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.473 01:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:31.473 { 00:10:31.473 "cntlid": 23, 00:10:31.473 "qid": 0, 00:10:31.473 "state": "enabled", 00:10:31.473 "thread": "nvmf_tgt_poll_group_000", 00:10:31.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:10:31.473 "listen_address": { 00:10:31.473 "trtype": "TCP", 00:10:31.473 "adrfam": "IPv4", 00:10:31.473 "traddr": "10.0.0.3", 00:10:31.473 "trsvcid": "4420" 00:10:31.473 }, 00:10:31.473 "peer_address": { 00:10:31.473 "trtype": "TCP", 00:10:31.473 "adrfam": "IPv4", 00:10:31.473 "traddr": "10.0.0.1", 00:10:31.473 "trsvcid": "51536" 00:10:31.473 }, 00:10:31.473 "auth": { 00:10:31.473 "state": "completed", 00:10:31.473 "digest": "sha256", 00:10:31.473 "dhgroup": "ffdhe3072" 00:10:31.473 } 00:10:31.473 } 00:10:31.473 ]' 00:10:31.473 01:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:31.473 01:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:10:31.473 01:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:31.733 01:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:31.733 01:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:31.733 01:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:31.733 01:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:31.733 01:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:31.992 01:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:10:31.992 01:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:10:32.561 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:32.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:32.561 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:10:32.561 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.561 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.561 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.561 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:32.561 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:32.561 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:32.561 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:32.820 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:10:32.820 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:32.820 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:32.820 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:32.820 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:32.820 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:32.820 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:32.820 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.820 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.820 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.820 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:32.820 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:32.820 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:33.079 00:10:33.079 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:33.079 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:33.079 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:33.338 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:33.338 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:33.338 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.338 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.338 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.338 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:33.338 { 00:10:33.338 "cntlid": 25, 00:10:33.338 "qid": 0, 00:10:33.338 "state": "enabled", 00:10:33.338 "thread": "nvmf_tgt_poll_group_000", 00:10:33.338 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:10:33.338 "listen_address": { 00:10:33.338 "trtype": "TCP", 00:10:33.338 "adrfam": "IPv4", 00:10:33.338 "traddr": "10.0.0.3", 00:10:33.338 "trsvcid": "4420" 00:10:33.338 }, 00:10:33.338 "peer_address": { 00:10:33.338 "trtype": "TCP", 00:10:33.338 "adrfam": "IPv4", 00:10:33.338 "traddr": "10.0.0.1", 00:10:33.338 "trsvcid": "51574" 00:10:33.338 }, 00:10:33.338 "auth": { 00:10:33.338 "state": "completed", 00:10:33.338 "digest": "sha256", 00:10:33.338 "dhgroup": "ffdhe4096" 00:10:33.338 } 00:10:33.338 } 00:10:33.338 ]' 00:10:33.338 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:10:33.597 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:33.597 01:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:33.597 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:33.597 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:33.597 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:33.597 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:33.597 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:33.856 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:10:33.856 01:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:10:34.424 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:34.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:34.424 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:10:34.425 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.425 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.425 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.425 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:34.425 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:34.425 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:34.993 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:10:34.993 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:34.993 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:34.993 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:34.993 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:34.993 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:34.993 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:34.993 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.993 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.993 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.993 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:34.993 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:34.993 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:35.252 00:10:35.252 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:35.252 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:35.252 01:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:35.511 01:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:35.511 01:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:35.511 01:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.511 01:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.511 01:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.511 01:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:35.511 { 00:10:35.511 "cntlid": 27, 00:10:35.511 "qid": 0, 00:10:35.511 "state": "enabled", 00:10:35.511 "thread": "nvmf_tgt_poll_group_000", 00:10:35.511 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:10:35.512 "listen_address": { 00:10:35.512 "trtype": "TCP", 00:10:35.512 "adrfam": "IPv4", 00:10:35.512 "traddr": "10.0.0.3", 00:10:35.512 "trsvcid": "4420" 00:10:35.512 }, 00:10:35.512 "peer_address": { 00:10:35.512 "trtype": "TCP", 00:10:35.512 "adrfam": "IPv4", 00:10:35.512 "traddr": "10.0.0.1", 00:10:35.512 "trsvcid": "51598" 00:10:35.512 }, 00:10:35.512 "auth": { 00:10:35.512 "state": "completed", 
00:10:35.512 "digest": "sha256", 00:10:35.512 "dhgroup": "ffdhe4096" 00:10:35.512 } 00:10:35.512 } 00:10:35.512 ]' 00:10:35.512 01:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:35.512 01:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:35.512 01:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:35.771 01:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:35.771 01:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:35.771 01:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:35.771 01:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:35.771 01:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:36.031 01:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:10:36.031 01:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:10:36.600 01:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:36.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:36.600 01:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:10:36.859 01:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.859 01:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.859 01:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.859 01:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:36.859 01:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:36.859 01:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:37.118 01:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:10:37.118 01:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:37.118 01:51:47 
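Each pass above keeps the same host/target RPC split: host registration goes to the target's default RPC socket (the bare rpc_cmd calls), while the dhchap option setup and the controller attach go to the host application via -s /var/tmp/host.sock. One pass in outline, reusing this run's addresses; key1/ckey1 are key names registered earlier in the test, outside this stretch of the log:

  # target side: authorize the host for key1, expecting ckey1 back
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host side: pin the negotiation, then attach with the matching pair
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.3 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1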
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:37.118 01:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:37.118 01:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:37.118 01:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:37.118 01:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:37.118 01:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.118 01:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.118 01:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.118 01:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:37.118 01:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:37.118 01:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:37.378 00:10:37.378 01:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:37.378 01:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:37.378 01:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:37.945 01:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:37.945 01:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:37.945 01:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.945 01:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.945 01:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.945 01:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:37.945 { 00:10:37.945 "cntlid": 29, 00:10:37.945 "qid": 0, 00:10:37.945 "state": "enabled", 00:10:37.945 "thread": "nvmf_tgt_poll_group_000", 00:10:37.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:10:37.945 "listen_address": { 00:10:37.945 "trtype": "TCP", 00:10:37.945 "adrfam": "IPv4", 00:10:37.945 "traddr": "10.0.0.3", 00:10:37.945 "trsvcid": "4420" 00:10:37.945 }, 00:10:37.945 "peer_address": { 00:10:37.945 "trtype": "TCP", 00:10:37.945 "adrfam": 
"IPv4", 00:10:37.945 "traddr": "10.0.0.1", 00:10:37.945 "trsvcid": "35302" 00:10:37.945 }, 00:10:37.945 "auth": { 00:10:37.945 "state": "completed", 00:10:37.945 "digest": "sha256", 00:10:37.945 "dhgroup": "ffdhe4096" 00:10:37.945 } 00:10:37.945 } 00:10:37.945 ]' 00:10:37.945 01:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:37.945 01:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:37.945 01:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:37.945 01:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:37.945 01:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:37.945 01:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:37.945 01:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:37.945 01:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:38.204 01:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:10:38.204 01:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:10:38.774 01:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:38.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:38.774 01:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:10:38.774 01:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.774 01:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.774 01:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.774 01:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:38.774 01:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:38.774 01:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:39.343 01:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:10:39.343 01:51:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:39.343 01:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:39.343 01:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:39.343 01:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:39.343 01:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:39.343 01:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key3 00:10:39.343 01:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.343 01:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.343 01:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.343 01:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:39.343 01:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:39.343 01:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:39.602 00:10:39.602 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:39.602 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:39.602 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:39.861 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:39.861 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:39.861 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.861 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.861 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.861 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:39.861 { 00:10:39.861 "cntlid": 31, 00:10:39.861 "qid": 0, 00:10:39.861 "state": "enabled", 00:10:39.861 "thread": "nvmf_tgt_poll_group_000", 00:10:39.861 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:10:39.861 "listen_address": { 00:10:39.861 "trtype": "TCP", 00:10:39.861 "adrfam": "IPv4", 00:10:39.861 "traddr": "10.0.0.3", 00:10:39.861 "trsvcid": "4420" 00:10:39.861 }, 00:10:39.861 "peer_address": { 00:10:39.861 "trtype": "TCP", 
00:10:39.861 "adrfam": "IPv4", 00:10:39.861 "traddr": "10.0.0.1", 00:10:39.861 "trsvcid": "35326" 00:10:39.861 }, 00:10:39.861 "auth": { 00:10:39.861 "state": "completed", 00:10:39.861 "digest": "sha256", 00:10:39.861 "dhgroup": "ffdhe4096" 00:10:39.861 } 00:10:39.861 } 00:10:39.861 ]' 00:10:39.861 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:39.861 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:39.861 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:40.119 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:40.119 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:40.119 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:40.119 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:40.119 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:40.379 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:10:40.379 01:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:10:40.947 01:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:40.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:40.947 01:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:10:40.947 01:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.947 01:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.947 01:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.947 01:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:40.947 01:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:40.947 01:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:40.947 01:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:41.516 01:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:10:41.516 
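The key3 pass that just finished is the asymmetric case: ckeys[3] is empty, so the ${ckeys[$3]:+...} expansion at target/auth.sh@68 drops --dhchap-ctrlr-key entirely and the handshake runs with host authentication only, as the bare --dhchap-key key3 calls above show. The expansion in isolation, with hypothetical values:

  ckeys=(ckey-a ckey-b ckey-c "")                  # index 3 deliberately empty
  ckey=(${ckeys[3]:+--dhchap-ctrlr-key "ckey3"})   # set-but-empty -> expands to nothing
  echo "${#ckey[@]}"                               # prints 0: no ctrlr-key flags passed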
01:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:41.516 01:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:41.516 01:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:41.516 01:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:41.516 01:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:41.516 01:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:41.516 01:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.516 01:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.516 01:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.516 01:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:41.516 01:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:41.516 01:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:41.775 00:10:41.775 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:41.775 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:41.775 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:42.034 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:42.034 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:42.034 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.034 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.034 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.034 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:42.034 { 00:10:42.034 "cntlid": 33, 00:10:42.034 "qid": 0, 00:10:42.034 "state": "enabled", 00:10:42.034 "thread": "nvmf_tgt_poll_group_000", 00:10:42.034 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:10:42.034 "listen_address": { 00:10:42.034 "trtype": "TCP", 00:10:42.034 "adrfam": "IPv4", 00:10:42.034 "traddr": 
"10.0.0.3", 00:10:42.034 "trsvcid": "4420" 00:10:42.034 }, 00:10:42.034 "peer_address": { 00:10:42.034 "trtype": "TCP", 00:10:42.034 "adrfam": "IPv4", 00:10:42.034 "traddr": "10.0.0.1", 00:10:42.034 "trsvcid": "35360" 00:10:42.034 }, 00:10:42.034 "auth": { 00:10:42.034 "state": "completed", 00:10:42.034 "digest": "sha256", 00:10:42.034 "dhgroup": "ffdhe6144" 00:10:42.034 } 00:10:42.034 } 00:10:42.034 ]' 00:10:42.034 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:42.292 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:42.292 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:42.292 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:42.292 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:42.292 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:42.292 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:42.292 01:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:42.551 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:10:42.551 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:10:43.118 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:43.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:43.118 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:10:43.118 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.118 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.118 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.118 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:43.118 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:43.118 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:43.377 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:10:43.377 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:43.377 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:43.377 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:43.377 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:43.377 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:43.377 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:43.377 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.377 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.377 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.377 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:43.377 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:43.377 01:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:43.945 00:10:43.945 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:43.945 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:43.945 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:44.204 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:44.204 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:44.204 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.204 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.204 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.204 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:44.204 { 00:10:44.204 "cntlid": 35, 00:10:44.204 "qid": 0, 00:10:44.204 "state": "enabled", 00:10:44.204 "thread": "nvmf_tgt_poll_group_000", 
00:10:44.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:10:44.204 "listen_address": { 00:10:44.204 "trtype": "TCP", 00:10:44.204 "adrfam": "IPv4", 00:10:44.204 "traddr": "10.0.0.3", 00:10:44.204 "trsvcid": "4420" 00:10:44.204 }, 00:10:44.204 "peer_address": { 00:10:44.204 "trtype": "TCP", 00:10:44.204 "adrfam": "IPv4", 00:10:44.204 "traddr": "10.0.0.1", 00:10:44.204 "trsvcid": "35394" 00:10:44.204 }, 00:10:44.204 "auth": { 00:10:44.204 "state": "completed", 00:10:44.204 "digest": "sha256", 00:10:44.204 "dhgroup": "ffdhe6144" 00:10:44.204 } 00:10:44.204 } 00:10:44.204 ]' 00:10:44.204 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:44.204 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:44.204 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:44.204 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:44.204 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:44.463 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:44.463 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:44.463 01:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:44.722 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:10:44.722 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:10:45.312 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:45.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:45.312 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:10:45.312 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.312 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.312 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.312 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:45.312 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:45.312 01:51:55 
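By this point the run has rolled from ffdhe4096 onto ffdhe6144; the whole stretch is one nested sweep that reapplies the host options before every key, so each handshake can only negotiate the single digest/dhgroup pair under test. In outline, listing just the groups visible in this part of the log:

  for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do
      for keyid in 0 1 2 3; do
          scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
              --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha256 "$dhgroup" "$keyid"   # the per-key cycle traced above
      done
  done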
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:45.571 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:10:45.571 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:45.571 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:45.571 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:45.571 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:45.571 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:45.571 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:45.571 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.571 01:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.571 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.571 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:45.571 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:45.571 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:46.139 00:10:46.139 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:46.139 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:46.139 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:46.398 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:46.398 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:46.398 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.398 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.398 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.398 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:46.398 { 
00:10:46.398 "cntlid": 37, 00:10:46.398 "qid": 0, 00:10:46.398 "state": "enabled", 00:10:46.398 "thread": "nvmf_tgt_poll_group_000", 00:10:46.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:10:46.398 "listen_address": { 00:10:46.398 "trtype": "TCP", 00:10:46.398 "adrfam": "IPv4", 00:10:46.398 "traddr": "10.0.0.3", 00:10:46.398 "trsvcid": "4420" 00:10:46.398 }, 00:10:46.398 "peer_address": { 00:10:46.398 "trtype": "TCP", 00:10:46.398 "adrfam": "IPv4", 00:10:46.398 "traddr": "10.0.0.1", 00:10:46.398 "trsvcid": "36770" 00:10:46.398 }, 00:10:46.398 "auth": { 00:10:46.398 "state": "completed", 00:10:46.398 "digest": "sha256", 00:10:46.398 "dhgroup": "ffdhe6144" 00:10:46.398 } 00:10:46.398 } 00:10:46.398 ]' 00:10:46.398 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:46.398 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:46.398 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:46.398 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:46.398 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:46.398 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:46.398 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:46.398 01:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:46.966 01:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:10:46.966 01:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:10:47.533 01:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:47.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:47.533 01:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:10:47.533 01:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.533 01:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.533 01:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.533 01:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:47.533 01:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
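Each cycle also tears down in a fixed order so the next key starts from a clean slate: drop the RPC-attached controller first, re-prove the same credentials through the kernel initiator, then deregister the host. Condensed from the trace (nvme_connect is the script's own helper, expanded at target/auth.sh@36):

  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme_connect --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"   # kernel-side re-check
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"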
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:47.533 01:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:47.792 01:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:10:47.792 01:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:47.792 01:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:47.792 01:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:47.792 01:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:47.792 01:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:47.792 01:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key3 00:10:47.792 01:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.792 01:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.792 01:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.792 01:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:47.792 01:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:47.792 01:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:48.359 00:10:48.359 01:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:48.359 01:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:48.359 01:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:48.618 01:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:48.618 01:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:48.618 01:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.618 01:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.618 01:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.618 01:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:10:48.618 { 00:10:48.618 "cntlid": 39, 00:10:48.618 "qid": 0, 00:10:48.618 "state": "enabled", 00:10:48.618 "thread": "nvmf_tgt_poll_group_000", 00:10:48.618 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:10:48.618 "listen_address": { 00:10:48.618 "trtype": "TCP", 00:10:48.618 "adrfam": "IPv4", 00:10:48.618 "traddr": "10.0.0.3", 00:10:48.618 "trsvcid": "4420" 00:10:48.618 }, 00:10:48.618 "peer_address": { 00:10:48.618 "trtype": "TCP", 00:10:48.618 "adrfam": "IPv4", 00:10:48.618 "traddr": "10.0.0.1", 00:10:48.618 "trsvcid": "36804" 00:10:48.618 }, 00:10:48.618 "auth": { 00:10:48.618 "state": "completed", 00:10:48.618 "digest": "sha256", 00:10:48.618 "dhgroup": "ffdhe6144" 00:10:48.618 } 00:10:48.618 } 00:10:48.618 ]' 00:10:48.618 01:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:48.618 01:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:48.618 01:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:48.618 01:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:48.618 01:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:48.618 01:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:48.618 01:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:48.618 01:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:49.184 01:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:10:49.184 01:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:10:49.752 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:49.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:49.752 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:10:49.752 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.752 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.752 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.752 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:49.752 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:49.752 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
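Every hostrpc line in this trace expands to the same invocation against the host application's RPC socket (visible in each target/auth.sh@31 echo), so the helper amounts to the one-liner below; the $rootdir name is an assumption, though the expansions themselves show it resolving to /home/vagrant/spdk_repo/spdk. The bare rpc_cmd calls, by contrast, hit the nvmf target's default socket.

  hostrpc() {
      "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"   # host app, not the target
  }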
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:49.752 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:50.011 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:10:50.011 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:50.011 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:50.011 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:50.011 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:50.011 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:50.011 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:50.011 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.011 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.011 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.011 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:50.011 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:50.011 01:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:50.945 00:10:50.945 01:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:50.945 01:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:50.945 01:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:51.204 01:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.204 01:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.204 01:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.204 01:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.204 01:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:10:51.204 01:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:51.204 { 00:10:51.204 "cntlid": 41, 00:10:51.204 "qid": 0, 00:10:51.204 "state": "enabled", 00:10:51.204 "thread": "nvmf_tgt_poll_group_000", 00:10:51.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:10:51.204 "listen_address": { 00:10:51.204 "trtype": "TCP", 00:10:51.204 "adrfam": "IPv4", 00:10:51.204 "traddr": "10.0.0.3", 00:10:51.204 "trsvcid": "4420" 00:10:51.204 }, 00:10:51.204 "peer_address": { 00:10:51.204 "trtype": "TCP", 00:10:51.204 "adrfam": "IPv4", 00:10:51.204 "traddr": "10.0.0.1", 00:10:51.204 "trsvcid": "36836" 00:10:51.204 }, 00:10:51.204 "auth": { 00:10:51.204 "state": "completed", 00:10:51.204 "digest": "sha256", 00:10:51.204 "dhgroup": "ffdhe8192" 00:10:51.204 } 00:10:51.204 } 00:10:51.204 ]' 00:10:51.204 01:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:51.204 01:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:51.204 01:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:51.204 01:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:51.204 01:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:51.204 01:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:51.204 01:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:51.204 01:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:51.772 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:10:51.772 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:10:52.340 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:52.340 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:10:52.340 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.340 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.340 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
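The kernel-initiator leg above pins the host identity to the same UUID that forms the hostnqn. A minimal sketch with this run's values; the flag readings are hedged from nvme-cli conventions (-i caps the I/O queue count, -l 0 zeroes the controller-loss timeout so a failed handshake surfaces immediately):

  hostid=7cdc77f7-6c10-48d3-83fa-703a290bdf89
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "nqn.2014-08.org.nvmexpress:uuid:${hostid}" --hostid "$hostid" -l 0 \
      --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0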
00:10:52.340 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:52.340 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:52.340 01:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:52.907 01:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:10:52.907 01:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:52.907 01:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:52.907 01:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:52.907 01:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:52.907 01:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:52.907 01:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:52.907 01:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.907 01:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.907 01:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.907 01:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:52.907 01:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:52.907 01:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:53.474 00:10:53.474 01:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:53.474 01:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:53.474 01:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:53.733 01:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:53.733 01:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:53.733 01:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.733 01:52:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.733 01:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.733 01:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:53.733 { 00:10:53.733 "cntlid": 43, 00:10:53.733 "qid": 0, 00:10:53.733 "state": "enabled", 00:10:53.733 "thread": "nvmf_tgt_poll_group_000", 00:10:53.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:10:53.733 "listen_address": { 00:10:53.733 "trtype": "TCP", 00:10:53.733 "adrfam": "IPv4", 00:10:53.733 "traddr": "10.0.0.3", 00:10:53.733 "trsvcid": "4420" 00:10:53.733 }, 00:10:53.733 "peer_address": { 00:10:53.733 "trtype": "TCP", 00:10:53.733 "adrfam": "IPv4", 00:10:53.733 "traddr": "10.0.0.1", 00:10:53.733 "trsvcid": "36848" 00:10:53.733 }, 00:10:53.733 "auth": { 00:10:53.733 "state": "completed", 00:10:53.733 "digest": "sha256", 00:10:53.733 "dhgroup": "ffdhe8192" 00:10:53.733 } 00:10:53.733 } 00:10:53.733 ]' 00:10:53.733 01:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:53.733 01:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:53.733 01:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:53.992 01:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:53.992 01:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:53.992 01:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:53.992 01:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:53.992 01:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:54.250 01:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:10:54.250 01:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:10:55.185 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:55.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:55.185 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:10:55.185 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.185 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
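The nvmf_subsystem_get_qpairs dump above is what the script asserts against: for each qpair the target reports which digest and DH group the admin connection actually negotiated and whether authentication reached the completed state. The backslash-escaped right-hand sides in the trace (\s\h\a\2\5\6 and so on) are just how xtrace renders the literal pattern side of a [[ == ]] comparison. A sketch of the check, with $rpc and $subnqn as in the earlier sketch:

    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    # Assert the negotiated auth parameters on the first (admin) qpair.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]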
00:10:55.185 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.185 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:55.186 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:55.186 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:55.186 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:10:55.186 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:55.186 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:55.186 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:55.186 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:55.186 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:55.186 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:55.186 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.186 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.186 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.186 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:55.186 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:55.186 01:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:56.121 00:10:56.121 01:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:56.121 01:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:56.121 01:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:56.379 01:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:56.379 01:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:56.379 01:52:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.379 01:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.379 01:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.379 01:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:56.379 { 00:10:56.379 "cntlid": 45, 00:10:56.379 "qid": 0, 00:10:56.379 "state": "enabled", 00:10:56.379 "thread": "nvmf_tgt_poll_group_000", 00:10:56.379 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:10:56.379 "listen_address": { 00:10:56.379 "trtype": "TCP", 00:10:56.379 "adrfam": "IPv4", 00:10:56.379 "traddr": "10.0.0.3", 00:10:56.379 "trsvcid": "4420" 00:10:56.379 }, 00:10:56.379 "peer_address": { 00:10:56.379 "trtype": "TCP", 00:10:56.379 "adrfam": "IPv4", 00:10:56.379 "traddr": "10.0.0.1", 00:10:56.379 "trsvcid": "59154" 00:10:56.379 }, 00:10:56.379 "auth": { 00:10:56.379 "state": "completed", 00:10:56.379 "digest": "sha256", 00:10:56.379 "dhgroup": "ffdhe8192" 00:10:56.379 } 00:10:56.379 } 00:10:56.379 ]' 00:10:56.379 01:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:56.379 01:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:56.379 01:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:56.379 01:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:56.379 01:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:56.379 01:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:56.379 01:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:56.379 01:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:56.638 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:10:56.638 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:10:57.572 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:57.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:57.572 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:10:57.572 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
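After the SPDK-host attach is verified and detached, the same subsystem is exercised from the kernel initiator: nvme-cli is handed the DHHC-1 secrets directly on the command line, so this leg authenticates without the SPDK keyring at all. Roughly, with the placeholder secrets standing in for the full DHHC-1:xx: strings visible in the trace:

    # Kernel initiator connect; fails unless DH-HMAC-CHAP succeeds end to end.
    nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 \
        --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
    nvme disconnect -n "$subnqn"

    # Tear down so the next digest/dhgroup/key combination starts clean.
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"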
00:10:57.572 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.572 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.572 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:57.572 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:57.572 01:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:57.831 01:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:10:57.831 01:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:57.831 01:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:57.831 01:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:57.831 01:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:57.831 01:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:57.831 01:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key3 00:10:57.831 01:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.831 01:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.831 01:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.831 01:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:57.831 01:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:57.831 01:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:58.400 00:10:58.400 01:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:58.400 01:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.400 01:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:58.658 01:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:58.658 01:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:58.659 
01:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.659 01:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.659 01:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.659 01:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:58.659 { 00:10:58.659 "cntlid": 47, 00:10:58.659 "qid": 0, 00:10:58.659 "state": "enabled", 00:10:58.659 "thread": "nvmf_tgt_poll_group_000", 00:10:58.659 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:10:58.659 "listen_address": { 00:10:58.659 "trtype": "TCP", 00:10:58.659 "adrfam": "IPv4", 00:10:58.659 "traddr": "10.0.0.3", 00:10:58.659 "trsvcid": "4420" 00:10:58.659 }, 00:10:58.659 "peer_address": { 00:10:58.659 "trtype": "TCP", 00:10:58.659 "adrfam": "IPv4", 00:10:58.659 "traddr": "10.0.0.1", 00:10:58.659 "trsvcid": "59166" 00:10:58.659 }, 00:10:58.659 "auth": { 00:10:58.659 "state": "completed", 00:10:58.659 "digest": "sha256", 00:10:58.659 "dhgroup": "ffdhe8192" 00:10:58.659 } 00:10:58.659 } 00:10:58.659 ]' 00:10:58.659 01:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:58.917 01:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:58.917 01:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:58.917 01:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:58.917 01:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:58.917 01:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:58.917 01:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:58.917 01:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:59.176 01:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:10:59.176 01:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:11:00.112 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:00.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:00.112 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:11:00.112 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.112 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
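Note what changes for keyid 3 in the cycle above: ckeys[3] is evidently empty, so the script's ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion collapses to nothing, nvmf_subsystem_add_host receives only --dhchap-key key3, and the matching nvme connect passes a single --dhchap-secret with no --dhchap-ctrl-secret. That iteration therefore appears to cover unidirectional authentication (the host proves itself, the controller is not challenged). The expansion pattern, sketched with a hypothetical index i:

    # :+ expands to the option pair only when ckeys[i] is set and non-empty,
    # so an empty slot silently switches the round to unidirectional auth.
    ckey=(${ckeys[i]:+--dhchap-ctrlr-key "ckey$i"})
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$i" "${ckey[@]}"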
00:11:00.112 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.112 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:00.112 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:00.112 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:00.112 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:00.113 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:00.372 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:11:00.372 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:00.372 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:00.372 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:00.372 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:00.372 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:00.372 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.372 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.372 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.372 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.372 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.372 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.372 01:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.631 00:11:00.631 01:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:00.631 01:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.631 01:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:00.890 01:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:00.890 01:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:00.890 01:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.890 01:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.890 01:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.890 01:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:00.890 { 00:11:00.890 "cntlid": 49, 00:11:00.890 "qid": 0, 00:11:00.890 "state": "enabled", 00:11:00.890 "thread": "nvmf_tgt_poll_group_000", 00:11:00.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:11:00.890 "listen_address": { 00:11:00.890 "trtype": "TCP", 00:11:00.890 "adrfam": "IPv4", 00:11:00.890 "traddr": "10.0.0.3", 00:11:00.890 "trsvcid": "4420" 00:11:00.890 }, 00:11:00.890 "peer_address": { 00:11:00.890 "trtype": "TCP", 00:11:00.890 "adrfam": "IPv4", 00:11:00.890 "traddr": "10.0.0.1", 00:11:00.890 "trsvcid": "59194" 00:11:00.890 }, 00:11:00.890 "auth": { 00:11:00.890 "state": "completed", 00:11:00.890 "digest": "sha384", 00:11:00.890 "dhgroup": "null" 00:11:00.890 } 00:11:00.890 } 00:11:00.890 ]' 00:11:00.890 01:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:00.890 01:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:00.890 01:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:01.150 01:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:01.150 01:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:01.150 01:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:01.150 01:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:01.150 01:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.408 01:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:11:01.408 01:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:11:01.975 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.975 01:52:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:11:01.975 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.975 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.234 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.234 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:02.234 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:02.234 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:02.492 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:11:02.492 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:02.492 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:02.492 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:02.492 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:02.493 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:02.493 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:02.493 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.493 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.493 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.493 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:02.493 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:02.493 01:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:02.751 00:11:02.751 01:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:02.751 01:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:02.751 01:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.009 01:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:03.009 01:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:03.009 01:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.009 01:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.009 01:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.009 01:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:03.009 { 00:11:03.009 "cntlid": 51, 00:11:03.009 "qid": 0, 00:11:03.009 "state": "enabled", 00:11:03.009 "thread": "nvmf_tgt_poll_group_000", 00:11:03.009 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:11:03.009 "listen_address": { 00:11:03.009 "trtype": "TCP", 00:11:03.009 "adrfam": "IPv4", 00:11:03.009 "traddr": "10.0.0.3", 00:11:03.009 "trsvcid": "4420" 00:11:03.009 }, 00:11:03.009 "peer_address": { 00:11:03.009 "trtype": "TCP", 00:11:03.009 "adrfam": "IPv4", 00:11:03.009 "traddr": "10.0.0.1", 00:11:03.009 "trsvcid": "59220" 00:11:03.009 }, 00:11:03.009 "auth": { 00:11:03.009 "state": "completed", 00:11:03.010 "digest": "sha384", 00:11:03.010 "dhgroup": "null" 00:11:03.010 } 00:11:03.010 } 00:11:03.010 ]' 00:11:03.010 01:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:03.010 01:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:03.010 01:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:03.010 01:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:03.010 01:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:03.268 01:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.268 01:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:03.268 01:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:03.535 01:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:11:03.535 01:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:11:04.126 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:04.126 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:04.126 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:11:04.126 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.126 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.126 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.126 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:04.126 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:04.126 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:04.385 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:11:04.385 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:04.385 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:04.385 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:04.385 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:04.385 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:04.385 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:04.385 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.385 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.385 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.385 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:04.385 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:04.385 01:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:04.951 00:11:04.951 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:04.951 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:11:04.951 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:05.210 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:05.210 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:05.210 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.210 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.210 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.210 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:05.210 { 00:11:05.210 "cntlid": 53, 00:11:05.210 "qid": 0, 00:11:05.210 "state": "enabled", 00:11:05.210 "thread": "nvmf_tgt_poll_group_000", 00:11:05.210 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:11:05.210 "listen_address": { 00:11:05.210 "trtype": "TCP", 00:11:05.210 "adrfam": "IPv4", 00:11:05.210 "traddr": "10.0.0.3", 00:11:05.210 "trsvcid": "4420" 00:11:05.210 }, 00:11:05.210 "peer_address": { 00:11:05.210 "trtype": "TCP", 00:11:05.210 "adrfam": "IPv4", 00:11:05.210 "traddr": "10.0.0.1", 00:11:05.210 "trsvcid": "59252" 00:11:05.210 }, 00:11:05.210 "auth": { 00:11:05.210 "state": "completed", 00:11:05.210 "digest": "sha384", 00:11:05.210 "dhgroup": "null" 00:11:05.210 } 00:11:05.210 } 00:11:05.210 ]' 00:11:05.210 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:05.210 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:05.210 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:05.210 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:05.210 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:05.210 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:05.210 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:05.210 01:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.469 01:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:11:05.469 01:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:11:06.403 01:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:06.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:06.403 01:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:11:06.403 01:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.403 01:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.404 01:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.404 01:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:06.404 01:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:06.404 01:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:06.404 01:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:11:06.404 01:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:06.404 01:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:06.404 01:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:06.404 01:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:06.404 01:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:06.404 01:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key3 00:11:06.404 01:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.404 01:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.404 01:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.404 01:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:06.404 01:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:06.404 01:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:06.972 00:11:06.972 01:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:06.972 01:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:06.972 01:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:07.231 01:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.231 01:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.231 01:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.231 01:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.231 01:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.231 01:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:07.231 { 00:11:07.231 "cntlid": 55, 00:11:07.231 "qid": 0, 00:11:07.231 "state": "enabled", 00:11:07.231 "thread": "nvmf_tgt_poll_group_000", 00:11:07.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:11:07.231 "listen_address": { 00:11:07.231 "trtype": "TCP", 00:11:07.231 "adrfam": "IPv4", 00:11:07.231 "traddr": "10.0.0.3", 00:11:07.231 "trsvcid": "4420" 00:11:07.231 }, 00:11:07.231 "peer_address": { 00:11:07.231 "trtype": "TCP", 00:11:07.231 "adrfam": "IPv4", 00:11:07.231 "traddr": "10.0.0.1", 00:11:07.231 "trsvcid": "39146" 00:11:07.231 }, 00:11:07.231 "auth": { 00:11:07.231 "state": "completed", 00:11:07.231 "digest": "sha384", 00:11:07.231 "dhgroup": "null" 00:11:07.231 } 00:11:07.231 } 00:11:07.231 ]' 00:11:07.231 01:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:07.231 01:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:07.231 01:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:07.231 01:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:07.231 01:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:07.231 01:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.231 01:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.231 01:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:07.490 01:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:11:07.490 01:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:11:08.426 01:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:08.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:11:08.426 01:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:11:08.426 01:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.426 01:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.426 01:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.426 01:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:08.426 01:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:08.426 01:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:08.426 01:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:08.426 01:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:11:08.426 01:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:08.426 01:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:08.426 01:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:08.426 01:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:08.426 01:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.426 01:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:08.426 01:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.426 01:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.426 01:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.426 01:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:08.426 01:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:08.426 01:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:08.685 00:11:08.944 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
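At this point the middle loop advances: the qpair dumps switch from dhgroup null to ffdhe2048 while the digest stays sha384, and the cntlid keeps climbing by two per cycle because every authenticated attach creates a fresh controller. The @118, @119, and @120 markers in the trace correspond to a nest of three loops over digest, DH group, and key index, roughly:

    for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
          # Host: restrict negotiation to this one combination.
          hostrpc bdev_nvme_set_options \
              --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          # Run the attach/verify/detach/nvme-cli round for this key.
          connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
      done
    done

Here hostrpc and connect_authenticate are the script's own helpers seen throughout the trace; the loop body is a sketch of the control flow, not a verbatim excerpt of target/auth.sh.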
00:11:08.945 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:08.945 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:09.203 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:09.203 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:09.203 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.203 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.203 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.203 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:09.203 { 00:11:09.203 "cntlid": 57, 00:11:09.203 "qid": 0, 00:11:09.203 "state": "enabled", 00:11:09.203 "thread": "nvmf_tgt_poll_group_000", 00:11:09.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:11:09.203 "listen_address": { 00:11:09.203 "trtype": "TCP", 00:11:09.203 "adrfam": "IPv4", 00:11:09.203 "traddr": "10.0.0.3", 00:11:09.203 "trsvcid": "4420" 00:11:09.203 }, 00:11:09.203 "peer_address": { 00:11:09.203 "trtype": "TCP", 00:11:09.203 "adrfam": "IPv4", 00:11:09.203 "traddr": "10.0.0.1", 00:11:09.203 "trsvcid": "39178" 00:11:09.203 }, 00:11:09.203 "auth": { 00:11:09.203 "state": "completed", 00:11:09.203 "digest": "sha384", 00:11:09.203 "dhgroup": "ffdhe2048" 00:11:09.203 } 00:11:09.203 } 00:11:09.203 ]' 00:11:09.203 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:09.203 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:09.203 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:09.203 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:09.203 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:09.203 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:09.203 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:09.203 01:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:09.462 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:11:09.462 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: 
--dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:11:10.397 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.397 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:11:10.397 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.397 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.397 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.397 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:10.397 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:10.397 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:10.397 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:11:10.397 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:10.397 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:10.397 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:10.397 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:10.397 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.397 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.397 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.397 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.397 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.397 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.397 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.397 01:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:10.965 00:11:10.965 01:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:10.965 01:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:10.965 01:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.965 01:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.965 01:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.965 01:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.965 01:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.965 01:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.224 01:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:11.224 { 00:11:11.224 "cntlid": 59, 00:11:11.224 "qid": 0, 00:11:11.224 "state": "enabled", 00:11:11.224 "thread": "nvmf_tgt_poll_group_000", 00:11:11.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:11:11.224 "listen_address": { 00:11:11.224 "trtype": "TCP", 00:11:11.224 "adrfam": "IPv4", 00:11:11.224 "traddr": "10.0.0.3", 00:11:11.224 "trsvcid": "4420" 00:11:11.224 }, 00:11:11.224 "peer_address": { 00:11:11.224 "trtype": "TCP", 00:11:11.224 "adrfam": "IPv4", 00:11:11.224 "traddr": "10.0.0.1", 00:11:11.224 "trsvcid": "39196" 00:11:11.224 }, 00:11:11.224 "auth": { 00:11:11.224 "state": "completed", 00:11:11.224 "digest": "sha384", 00:11:11.224 "dhgroup": "ffdhe2048" 00:11:11.224 } 00:11:11.224 } 00:11:11.224 ]' 00:11:11.224 01:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:11.224 01:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:11.224 01:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:11.224 01:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:11.224 01:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:11.224 01:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.224 01:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.224 01:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.483 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:11:11.483 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:11:12.418 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:12.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:12.418 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:11:12.418 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.418 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.418 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.418 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:12.418 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:12.418 01:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:12.677 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:11:12.677 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:12.677 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:12.678 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:12.678 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:12.678 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.678 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.678 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.678 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.678 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.678 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.678 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.678 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:12.936 00:11:12.936 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:12.936 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:12.936 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:13.196 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:13.196 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:13.196 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.196 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.196 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.196 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:13.196 { 00:11:13.196 "cntlid": 61, 00:11:13.196 "qid": 0, 00:11:13.196 "state": "enabled", 00:11:13.196 "thread": "nvmf_tgt_poll_group_000", 00:11:13.196 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:11:13.196 "listen_address": { 00:11:13.196 "trtype": "TCP", 00:11:13.196 "adrfam": "IPv4", 00:11:13.196 "traddr": "10.0.0.3", 00:11:13.196 "trsvcid": "4420" 00:11:13.196 }, 00:11:13.196 "peer_address": { 00:11:13.196 "trtype": "TCP", 00:11:13.196 "adrfam": "IPv4", 00:11:13.196 "traddr": "10.0.0.1", 00:11:13.196 "trsvcid": "39216" 00:11:13.196 }, 00:11:13.196 "auth": { 00:11:13.196 "state": "completed", 00:11:13.196 "digest": "sha384", 00:11:13.196 "dhgroup": "ffdhe2048" 00:11:13.196 } 00:11:13.196 } 00:11:13.196 ]' 00:11:13.196 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:13.196 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:13.196 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:13.196 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:13.196 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:13.455 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.455 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.455 01:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.713 01:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:11:13.713 01:52:24 
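
The frames above are one full pass of the test's connect/authenticate cycle. Below is a condensed sketch of that cycle using the paths, NQNs, and flags exactly as they appear in this log; `rpc_cmd`/key names are the test framework's own, and the target-side calls are assumed to go over SPDK's default RPC socket:

```bash
# Condensed sketch of one connect_authenticate pass as traced in this log.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89

# 1. Pin the host application to a single digest/dhgroup combination.
"$RPC" -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

# 2. Register the host on the target with key2 (ckey2 enables bidirectional auth).
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 3. Attach a controller from the host side, which forces DH-HMAC-CHAP to run,
#    then tear it down again before the next key is tried.
"$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
"$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0
```
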
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:11:14.280 01:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.280 01:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:11:14.280 01:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.280 01:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.280 01:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.280 01:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:14.280 01:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:14.280 01:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:14.538 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:11:14.538 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:14.538 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:14.538 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:14.538 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:14.538 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.538 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key3 00:11:14.538 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.538 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.538 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.538 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:14.538 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:14.538 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
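
Note that the key3 pass above registers the host with `--dhchap-key key3` only: the `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` frame is bash's conditional expansion dropping the controller-key flags when no ctrlr key is configured for that id. A minimal, self-contained demo of the idiom (array contents here are placeholders):

```bash
# ${var:+word} expands to "word" only when var is set and non-empty, so the
# --dhchap-ctrlr-key pair silently disappears for ids with no controller key.
ckeys=([0]=c0 [1]=c1 [2]=c2 [3]=)   # placeholder values; id 3 has no ctrlr key
for keyid in "${!ckeys[@]}"; do
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "key$keyid -> ${ckey[*]:-<unidirectional>}"
done
```
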
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:15.104 00:11:15.104 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:15.104 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.104 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:15.364 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.364 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.364 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.364 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.364 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.364 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:15.364 { 00:11:15.364 "cntlid": 63, 00:11:15.364 "qid": 0, 00:11:15.364 "state": "enabled", 00:11:15.364 "thread": "nvmf_tgt_poll_group_000", 00:11:15.364 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:11:15.364 "listen_address": { 00:11:15.364 "trtype": "TCP", 00:11:15.364 "adrfam": "IPv4", 00:11:15.364 "traddr": "10.0.0.3", 00:11:15.364 "trsvcid": "4420" 00:11:15.364 }, 00:11:15.364 "peer_address": { 00:11:15.364 "trtype": "TCP", 00:11:15.364 "adrfam": "IPv4", 00:11:15.364 "traddr": "10.0.0.1", 00:11:15.364 "trsvcid": "39244" 00:11:15.364 }, 00:11:15.364 "auth": { 00:11:15.364 "state": "completed", 00:11:15.364 "digest": "sha384", 00:11:15.364 "dhgroup": "ffdhe2048" 00:11:15.364 } 00:11:15.364 } 00:11:15.364 ]' 00:11:15.364 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:15.364 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:15.364 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:15.364 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:15.364 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:15.364 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.364 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.364 01:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:15.624 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:11:15.624 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
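
Each pass is then verified on both sides: the host must report the controller it attached, and the target's qpair must carry the negotiated auth parameters. A sketch of those checks, assuming the same `rpc_cmd`/`hostrpc` helpers the log uses:

```bash
# Host side: the attached controller must show up under the expected name.
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

# Target side: the qpair's auth block must match what was configured.
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha384" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe2048" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]
```
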
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:11:16.561 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.561 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:11:16.561 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.561 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.561 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.561 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:16.561 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:16.561 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:16.561 01:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:16.561 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:11:16.561 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:16.561 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:16.561 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:16.561 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:16.561 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.561 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:16.561 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.561 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.561 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.561 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:16.561 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:11:16.561 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.129 00:11:17.129 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:17.129 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.129 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:17.388 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.388 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.388 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.388 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.388 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.388 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:17.388 { 00:11:17.388 "cntlid": 65, 00:11:17.388 "qid": 0, 00:11:17.388 "state": "enabled", 00:11:17.388 "thread": "nvmf_tgt_poll_group_000", 00:11:17.388 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:11:17.388 "listen_address": { 00:11:17.388 "trtype": "TCP", 00:11:17.388 "adrfam": "IPv4", 00:11:17.388 "traddr": "10.0.0.3", 00:11:17.388 "trsvcid": "4420" 00:11:17.388 }, 00:11:17.388 "peer_address": { 00:11:17.388 "trtype": "TCP", 00:11:17.388 "adrfam": "IPv4", 00:11:17.388 "traddr": "10.0.0.1", 00:11:17.388 "trsvcid": "46118" 00:11:17.388 }, 00:11:17.388 "auth": { 00:11:17.388 "state": "completed", 00:11:17.388 "digest": "sha384", 00:11:17.388 "dhgroup": "ffdhe3072" 00:11:17.388 } 00:11:17.388 } 00:11:17.388 ]' 00:11:17.388 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:17.388 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:17.388 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:17.388 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:17.388 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:17.388 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.388 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.388 01:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.956 01:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:11:17.956 01:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:11:18.524 01:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.524 01:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:11:18.524 01:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.524 01:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.524 01:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.524 01:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:18.524 01:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:18.524 01:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:18.782 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:11:18.782 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:18.782 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:18.782 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:18.782 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:18.782 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.782 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:18.782 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.782 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.782 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.782 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:18.782 01:52:29 
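
The secrets throughout this log use the DHHC-1 representation produced by nvme-cli's `gen-dhchap-key` (the field semantics below come from that format, not from the log itself): `DHHC-1:<t>:<base64>:`, where `<t>` selects an optional hash transformation of the secret (00 = none, 01/02/03 = SHA-256/384/512) and the base64 payload is the raw secret with a 4-byte CRC-32 appended. That is why the payloads here decode to 36, 52, and 68 bytes for 32-, 48-, and 64-byte secrets:

```bash
# Decode one of the log's secrets and confirm the secret+CRC-32 length.
key='DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh:'
b64=${key#DHHC-1:??:}   # strip the "DHHC-1:<t>:" prefix
b64=${b64%:}            # strip the trailing colon
echo -n "$b64" | base64 -d | wc -c   # prints 36 (32-byte secret + 4-byte CRC)
```
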
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:18.782 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:19.039 00:11:19.039 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:19.039 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:19.039 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.297 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.297 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.297 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.297 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.297 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.297 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:19.297 { 00:11:19.297 "cntlid": 67, 00:11:19.297 "qid": 0, 00:11:19.297 "state": "enabled", 00:11:19.297 "thread": "nvmf_tgt_poll_group_000", 00:11:19.297 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:11:19.297 "listen_address": { 00:11:19.297 "trtype": "TCP", 00:11:19.297 "adrfam": "IPv4", 00:11:19.297 "traddr": "10.0.0.3", 00:11:19.297 "trsvcid": "4420" 00:11:19.297 }, 00:11:19.297 "peer_address": { 00:11:19.297 "trtype": "TCP", 00:11:19.297 "adrfam": "IPv4", 00:11:19.297 "traddr": "10.0.0.1", 00:11:19.297 "trsvcid": "46144" 00:11:19.297 }, 00:11:19.297 "auth": { 00:11:19.297 "state": "completed", 00:11:19.297 "digest": "sha384", 00:11:19.297 "dhgroup": "ffdhe3072" 00:11:19.297 } 00:11:19.297 } 00:11:19.297 ]' 00:11:19.297 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:19.297 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:19.297 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:19.297 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:19.297 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:19.557 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.557 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.557 01:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.855 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:11:19.855 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:11:20.447 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.447 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:11:20.447 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.447 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.447 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.447 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:20.447 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:20.447 01:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:20.706 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:11:20.706 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:20.706 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:20.706 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:20.706 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:20.706 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.706 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:20.706 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.706 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.706 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.706 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
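
Note the two RPC paths visible in every frame: `rpc_cmd` drives the nvmf target over SPDK's default socket, while the `hostrpc` lines expanded at target/auth.sh@31 add `-s /var/tmp/host.sock` to reach the separate host-side bdev application. A plausible reconstruction of that helper (the real script may differ in detail):

```bash
# hostrpc as the target/auth.sh@31 frames suggest it is defined.
hostrpc() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
}

hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # -> nvme0 while attached
```
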
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:20.706 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:20.706 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:21.273 00:11:21.273 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:21.273 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.273 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:21.532 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.532 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.532 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.532 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.532 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.532 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:21.532 { 00:11:21.532 "cntlid": 69, 00:11:21.532 "qid": 0, 00:11:21.532 "state": "enabled", 00:11:21.532 "thread": "nvmf_tgt_poll_group_000", 00:11:21.532 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:11:21.532 "listen_address": { 00:11:21.532 "trtype": "TCP", 00:11:21.532 "adrfam": "IPv4", 00:11:21.532 "traddr": "10.0.0.3", 00:11:21.532 "trsvcid": "4420" 00:11:21.532 }, 00:11:21.532 "peer_address": { 00:11:21.532 "trtype": "TCP", 00:11:21.532 "adrfam": "IPv4", 00:11:21.532 "traddr": "10.0.0.1", 00:11:21.532 "trsvcid": "46164" 00:11:21.532 }, 00:11:21.532 "auth": { 00:11:21.532 "state": "completed", 00:11:21.532 "digest": "sha384", 00:11:21.532 "dhgroup": "ffdhe3072" 00:11:21.532 } 00:11:21.532 } 00:11:21.532 ]' 00:11:21.532 01:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:21.532 01:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:21.532 01:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:21.532 01:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:21.533 01:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:21.533 01:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.533 01:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:11:21.533 01:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.792 01:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:11:21.792 01:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:11:22.728 01:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.728 01:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:11:22.728 01:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.728 01:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.728 01:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.728 01:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:22.728 01:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:22.728 01:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:22.987 01:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:11:22.987 01:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:22.987 01:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:22.987 01:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:22.987 01:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:22.987 01:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.987 01:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key3 00:11:22.987 01:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.987 01:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.987 01:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.987 01:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:22.987 01:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:22.987 01:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:23.245 00:11:23.245 01:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:23.245 01:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:23.245 01:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.504 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.504 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.504 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.504 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.504 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.504 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:23.504 { 00:11:23.504 "cntlid": 71, 00:11:23.504 "qid": 0, 00:11:23.504 "state": "enabled", 00:11:23.504 "thread": "nvmf_tgt_poll_group_000", 00:11:23.504 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:11:23.504 "listen_address": { 00:11:23.505 "trtype": "TCP", 00:11:23.505 "adrfam": "IPv4", 00:11:23.505 "traddr": "10.0.0.3", 00:11:23.505 "trsvcid": "4420" 00:11:23.505 }, 00:11:23.505 "peer_address": { 00:11:23.505 "trtype": "TCP", 00:11:23.505 "adrfam": "IPv4", 00:11:23.505 "traddr": "10.0.0.1", 00:11:23.505 "trsvcid": "46178" 00:11:23.505 }, 00:11:23.505 "auth": { 00:11:23.505 "state": "completed", 00:11:23.505 "digest": "sha384", 00:11:23.505 "dhgroup": "ffdhe3072" 00:11:23.505 } 00:11:23.505 } 00:11:23.505 ]' 00:11:23.505 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:23.764 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:23.764 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:23.764 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:23.764 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:23.764 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.764 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.764 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.023 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:11:24.023 01:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:11:24.590 01:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.590 01:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:11:24.590 01:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.590 01:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.849 01:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.850 01:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:24.850 01:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:24.850 01:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:24.850 01:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:24.850 01:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:11:24.850 01:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:24.850 01:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:24.850 01:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:24.850 01:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:24.850 01:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:24.850 01:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:24.850 01:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.850 01:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.109 01:52:35 
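
The target/auth.sh@119-121 frames above mark the sweep structure: an outer loop over DH groups and an inner loop over every configured key, re-restricting the host before each pass. Schematically (only ffdhe2048/3072/4096 appear in this excerpt, so the group list below is limited to those; `keys`, `hostrpc`, and `connect_authenticate` are the script's own):

```bash
# Paraphrase of the loop structure behind the @119/@120/@121 frames.
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)   # groups seen in this excerpt
for dhgroup in "${dhgroups[@]}"; do        # target/auth.sh@119
    for keyid in "${!keys[@]}"; do         # target/auth.sh@120
        hostrpc bdev_nvme_set_options --dhchap-digests sha384 \
            --dhchap-dhgroups "$dhgroup"   # target/auth.sh@121
        connect_authenticate sha384 "$dhgroup" "$keyid"   # target/auth.sh@123
    done
done
```
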
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.109 01:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.109 01:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.109 01:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.368 00:11:25.368 01:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:25.368 01:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.368 01:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:25.627 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.627 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.627 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.627 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.627 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.627 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:25.627 { 00:11:25.627 "cntlid": 73, 00:11:25.627 "qid": 0, 00:11:25.627 "state": "enabled", 00:11:25.627 "thread": "nvmf_tgt_poll_group_000", 00:11:25.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:11:25.627 "listen_address": { 00:11:25.627 "trtype": "TCP", 00:11:25.627 "adrfam": "IPv4", 00:11:25.627 "traddr": "10.0.0.3", 00:11:25.627 "trsvcid": "4420" 00:11:25.627 }, 00:11:25.627 "peer_address": { 00:11:25.627 "trtype": "TCP", 00:11:25.627 "adrfam": "IPv4", 00:11:25.627 "traddr": "10.0.0.1", 00:11:25.627 "trsvcid": "46190" 00:11:25.627 }, 00:11:25.627 "auth": { 00:11:25.627 "state": "completed", 00:11:25.627 "digest": "sha384", 00:11:25.627 "dhgroup": "ffdhe4096" 00:11:25.627 } 00:11:25.627 } 00:11:25.627 ]' 00:11:25.627 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:25.627 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:25.627 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:25.627 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:25.627 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:25.627 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:25.627 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:25.627 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:25.887 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:11:25.887 01:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:11:26.824 01:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.824 01:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:11:26.824 01:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.824 01:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.824 01:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.824 01:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:26.824 01:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:26.824 01:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:27.083 01:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:11:27.083 01:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:27.083 01:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:27.083 01:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:27.083 01:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:27.083 01:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.083 01:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:27.083 01:52:37 
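
A small aside on the odd-looking `[[ nvme0 == \n\v\m\e\0 ]]` frames: the right-hand side of `==` inside `[[ ]]` is a glob pattern, so the script quotes it to force a byte-for-byte comparison, and bash's xtrace renders that quoting as backslashes before every character:

```bash
# Quoted RHS: literal comparison, not a glob; xtrace prints it as \n\v\m\e\0.
ctrlr_name=nvme0
[[ $ctrlr_name == "nvme0" ]] && echo match
```
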
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.083 01:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.083 01:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.083 01:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:27.083 01:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:27.083 01:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:27.342 00:11:27.342 01:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:27.342 01:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.342 01:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:27.600 01:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.600 01:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.600 01:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.600 01:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.601 01:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.601 01:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:27.601 { 00:11:27.601 "cntlid": 75, 00:11:27.601 "qid": 0, 00:11:27.601 "state": "enabled", 00:11:27.601 "thread": "nvmf_tgt_poll_group_000", 00:11:27.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:11:27.601 "listen_address": { 00:11:27.601 "trtype": "TCP", 00:11:27.601 "adrfam": "IPv4", 00:11:27.601 "traddr": "10.0.0.3", 00:11:27.601 "trsvcid": "4420" 00:11:27.601 }, 00:11:27.601 "peer_address": { 00:11:27.601 "trtype": "TCP", 00:11:27.601 "adrfam": "IPv4", 00:11:27.601 "traddr": "10.0.0.1", 00:11:27.601 "trsvcid": "42096" 00:11:27.601 }, 00:11:27.601 "auth": { 00:11:27.601 "state": "completed", 00:11:27.601 "digest": "sha384", 00:11:27.601 "dhgroup": "ffdhe4096" 00:11:27.601 } 00:11:27.601 } 00:11:27.601 ]' 00:11:27.601 01:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:27.601 01:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:27.601 01:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:27.860 01:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:11:27.860 01:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:27.860 01:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.860 01:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.860 01:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.118 01:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:11:28.118 01:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:11:28.698 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.698 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:11:28.698 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.698 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.698 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.698 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:28.698 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:28.698 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:28.960 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:11:28.960 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:28.960 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:28.960 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:28.960 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:28.960 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:28.960 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:28.960 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.960 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.960 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.960 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:28.960 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:28.960 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:29.527 00:11:29.527 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:29.527 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:29.527 01:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.785 01:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.785 01:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.785 01:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.785 01:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.785 01:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.785 01:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:29.785 { 00:11:29.785 "cntlid": 77, 00:11:29.785 "qid": 0, 00:11:29.785 "state": "enabled", 00:11:29.785 "thread": "nvmf_tgt_poll_group_000", 00:11:29.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:11:29.785 "listen_address": { 00:11:29.785 "trtype": "TCP", 00:11:29.785 "adrfam": "IPv4", 00:11:29.785 "traddr": "10.0.0.3", 00:11:29.785 "trsvcid": "4420" 00:11:29.785 }, 00:11:29.785 "peer_address": { 00:11:29.785 "trtype": "TCP", 00:11:29.785 "adrfam": "IPv4", 00:11:29.785 "traddr": "10.0.0.1", 00:11:29.785 "trsvcid": "42120" 00:11:29.785 }, 00:11:29.785 "auth": { 00:11:29.785 "state": "completed", 00:11:29.785 "digest": "sha384", 00:11:29.785 "dhgroup": "ffdhe4096" 00:11:29.785 } 00:11:29.785 } 00:11:29.785 ]' 00:11:29.785 01:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:29.785 01:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:29.785 01:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:11:29.785 01:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:29.785 01:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:29.785 01:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.785 01:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.785 01:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.044 01:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:11:30.044 01:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:11:30.981 01:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.982 01:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:11:30.982 01:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.982 01:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.982 01:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.982 01:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:30.982 01:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:30.982 01:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:30.982 01:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:11:30.982 01:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:30.982 01:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:30.982 01:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:30.982 01:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:31.241 01:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.241 01:52:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key3 00:11:31.241 01:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.241 01:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.241 01:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.241 01:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:31.241 01:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:31.241 01:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:31.499 00:11:31.499 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:31.499 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:31.499 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.758 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.758 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.758 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.758 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.758 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.758 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:31.758 { 00:11:31.758 "cntlid": 79, 00:11:31.758 "qid": 0, 00:11:31.758 "state": "enabled", 00:11:31.758 "thread": "nvmf_tgt_poll_group_000", 00:11:31.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:11:31.758 "listen_address": { 00:11:31.758 "trtype": "TCP", 00:11:31.758 "adrfam": "IPv4", 00:11:31.758 "traddr": "10.0.0.3", 00:11:31.758 "trsvcid": "4420" 00:11:31.758 }, 00:11:31.758 "peer_address": { 00:11:31.758 "trtype": "TCP", 00:11:31.758 "adrfam": "IPv4", 00:11:31.758 "traddr": "10.0.0.1", 00:11:31.758 "trsvcid": "42154" 00:11:31.758 }, 00:11:31.758 "auth": { 00:11:31.758 "state": "completed", 00:11:31.758 "digest": "sha384", 00:11:31.758 "dhgroup": "ffdhe4096" 00:11:31.758 } 00:11:31.758 } 00:11:31.758 ]' 00:11:31.758 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:31.758 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:31.758 01:52:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:31.758 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:31.758 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:32.018 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:32.018 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.018 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.276 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:11:32.276 01:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:11:32.844 01:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.844 01:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:11:32.844 01:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.844 01:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.844 01:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.844 01:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:32.844 01:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:32.844 01:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:32.844 01:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:33.103 01:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:11:33.103 01:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:33.103 01:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:33.103 01:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:33.103 01:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:33.103 01:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.103 01:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:33.103 01:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.103 01:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.103 01:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.103 01:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:33.103 01:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:33.103 01:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:33.671 00:11:33.671 01:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:33.671 01:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:33.671 01:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.934 01:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.934 01:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.934 01:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.934 01:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.934 01:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.934 01:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:33.934 { 00:11:33.934 "cntlid": 81, 00:11:33.934 "qid": 0, 00:11:33.934 "state": "enabled", 00:11:33.934 "thread": "nvmf_tgt_poll_group_000", 00:11:33.934 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:11:33.934 "listen_address": { 00:11:33.934 "trtype": "TCP", 00:11:33.934 "adrfam": "IPv4", 00:11:33.934 "traddr": "10.0.0.3", 00:11:33.934 "trsvcid": "4420" 00:11:33.934 }, 00:11:33.934 "peer_address": { 00:11:33.934 "trtype": "TCP", 00:11:33.934 "adrfam": "IPv4", 00:11:33.934 "traddr": "10.0.0.1", 00:11:33.934 "trsvcid": "42186" 00:11:33.934 }, 00:11:33.934 "auth": { 00:11:33.934 "state": "completed", 00:11:33.934 "digest": "sha384", 00:11:33.934 "dhgroup": "ffdhe6144" 00:11:33.934 } 00:11:33.934 } 00:11:33.934 ]' 00:11:33.934 01:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
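Each block of trace above is one iteration of the same DH-HMAC-CHAP check that target/auth.sh repeats for every digest/dhgroup/key combination: pin the host's allowed digest and DH group, register the key pair on the subsystem, attach, verify the qpair's auth state, and tear down. A minimal sketch of that cycle, using only RPCs that appear verbatim in the trace (the target-side RPC socket path and the hostrpc/tgtrpc wrappers here are assumptions; auth.sh drives the same calls through its own hostrpc/rpc_cmd/bdev_connect helpers):

    #!/usr/bin/env bash
    # Host-side bdev_nvme_* RPCs go to the initiator app at /var/tmp/host.sock,
    # as shown in the trace; the target-side socket path below is an assumption.
    hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
    tgtrpc()  { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }  # assumed socket
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89
    subnqn=nqn.2024-03.io.spdk:cnode0

    # 1. Restrict the initiator to a single digest/dhgroup pair under test.
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

    # 2. Allow the host on the subsystem with the key pair under test
    #    (key0..key3 / ckey0..ckey3 are keyring names loaded earlier by the test).
    tgtrpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # 3. Connect; the attach only succeeds if DH-HMAC-CHAP completes.
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # 4. Verify from both sides: the controller exists on the host, and the
    #    target reports an authenticated qpair with the expected parameters.
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$(tgtrpc nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # 5. Tear down before the next key/dhgroup combination.
    hostrpc bdev_nvme_detach_controller nvme0
    tgtrpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The surrounding nvme connect / nvme disconnect entries exercise the same key pairs through the kernel initiator as well, passing the DHHC-1 secrets directly via --dhchap-secret and --dhchap-ctrl-secret rather than through keyring names.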
00:11:33.934 01:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:33.934 01:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:33.934 01:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:33.934 01:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:34.193 01:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.193 01:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.193 01:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.452 01:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:11:34.452 01:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:11:35.020 01:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.020 01:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:11:35.020 01:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.020 01:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.020 01:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.020 01:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:35.020 01:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:35.020 01:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:35.279 01:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:11:35.279 01:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:35.279 01:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:35.279 01:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:11:35.279 01:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:35.279 01:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.279 01:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:35.279 01:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.279 01:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.279 01:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.279 01:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:35.279 01:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:35.279 01:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:35.847 00:11:35.847 01:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:35.847 01:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:35.847 01:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.106 01:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.106 01:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.106 01:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.106 01:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.106 01:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.106 01:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:36.106 { 00:11:36.106 "cntlid": 83, 00:11:36.107 "qid": 0, 00:11:36.107 "state": "enabled", 00:11:36.107 "thread": "nvmf_tgt_poll_group_000", 00:11:36.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:11:36.107 "listen_address": { 00:11:36.107 "trtype": "TCP", 00:11:36.107 "adrfam": "IPv4", 00:11:36.107 "traddr": "10.0.0.3", 00:11:36.107 "trsvcid": "4420" 00:11:36.107 }, 00:11:36.107 "peer_address": { 00:11:36.107 "trtype": "TCP", 00:11:36.107 "adrfam": "IPv4", 00:11:36.107 "traddr": "10.0.0.1", 00:11:36.107 "trsvcid": "57568" 00:11:36.107 }, 00:11:36.107 "auth": { 00:11:36.107 "state": "completed", 00:11:36.107 "digest": "sha384", 
00:11:36.107 "dhgroup": "ffdhe6144" 00:11:36.107 } 00:11:36.107 } 00:11:36.107 ]' 00:11:36.107 01:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:36.107 01:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:36.107 01:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:36.107 01:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:36.107 01:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:36.107 01:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:36.107 01:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:36.107 01:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.674 01:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:11:36.674 01:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:11:37.242 01:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.242 01:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:11:37.242 01:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.242 01:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.242 01:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.242 01:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:37.242 01:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:37.242 01:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:37.242 01:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:11:37.501 01:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:37.501 01:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:11:37.501 01:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:37.501 01:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:37.501 01:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.501 01:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:37.501 01:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.501 01:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.501 01:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.501 01:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:37.501 01:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:37.501 01:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.068 00:11:38.068 01:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:38.068 01:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:38.068 01:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.327 01:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.327 01:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.327 01:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.327 01:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.327 01:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.328 01:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:38.328 { 00:11:38.328 "cntlid": 85, 00:11:38.328 "qid": 0, 00:11:38.328 "state": "enabled", 00:11:38.328 "thread": "nvmf_tgt_poll_group_000", 00:11:38.328 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:11:38.328 "listen_address": { 00:11:38.328 "trtype": "TCP", 00:11:38.328 "adrfam": "IPv4", 00:11:38.328 "traddr": "10.0.0.3", 00:11:38.328 "trsvcid": "4420" 00:11:38.328 }, 00:11:38.328 "peer_address": { 00:11:38.328 "trtype": "TCP", 00:11:38.328 "adrfam": "IPv4", 00:11:38.328 "traddr": "10.0.0.1", 00:11:38.328 "trsvcid": "57598" 
00:11:38.328 }, 00:11:38.328 "auth": { 00:11:38.328 "state": "completed", 00:11:38.328 "digest": "sha384", 00:11:38.328 "dhgroup": "ffdhe6144" 00:11:38.328 } 00:11:38.328 } 00:11:38.328 ]' 00:11:38.328 01:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:38.328 01:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:38.328 01:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:38.328 01:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:38.328 01:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:38.328 01:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.328 01:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.328 01:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.587 01:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:11:38.587 01:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:11:39.154 01:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.154 01:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:11:39.154 01:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.154 01:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.154 01:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.154 01:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:39.154 01:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:39.154 01:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:39.413 01:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:11:39.414 01:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:11:39.414 01:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:39.414 01:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:39.414 01:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:39.414 01:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.414 01:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key3 00:11:39.414 01:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.414 01:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.414 01:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.414 01:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:39.414 01:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:39.414 01:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:39.981 00:11:39.981 01:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:39.981 01:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:39.981 01:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.241 01:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.241 01:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.241 01:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.241 01:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.241 01:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.241 01:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:40.241 { 00:11:40.241 "cntlid": 87, 00:11:40.241 "qid": 0, 00:11:40.241 "state": "enabled", 00:11:40.241 "thread": "nvmf_tgt_poll_group_000", 00:11:40.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:11:40.241 "listen_address": { 00:11:40.241 "trtype": "TCP", 00:11:40.241 "adrfam": "IPv4", 00:11:40.241 "traddr": "10.0.0.3", 00:11:40.241 "trsvcid": "4420" 00:11:40.241 }, 00:11:40.241 "peer_address": { 00:11:40.241 "trtype": "TCP", 00:11:40.241 "adrfam": "IPv4", 00:11:40.241 "traddr": "10.0.0.1", 00:11:40.241 "trsvcid": 
"57626" 00:11:40.241 }, 00:11:40.241 "auth": { 00:11:40.241 "state": "completed", 00:11:40.241 "digest": "sha384", 00:11:40.241 "dhgroup": "ffdhe6144" 00:11:40.241 } 00:11:40.241 } 00:11:40.241 ]' 00:11:40.241 01:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:40.241 01:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:40.241 01:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:40.241 01:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:40.241 01:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:40.500 01:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.500 01:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.500 01:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.758 01:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:11:40.758 01:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:11:41.325 01:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.325 01:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:11:41.325 01:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.325 01:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.325 01:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.325 01:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:41.325 01:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:41.325 01:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:41.325 01:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:41.584 01:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:11:41.584 01:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:11:41.584 01:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:41.584 01:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:41.584 01:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:41.584 01:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.584 01:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:41.584 01:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.584 01:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.584 01:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.584 01:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:41.584 01:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:41.584 01:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.152 00:11:42.152 01:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:42.152 01:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:42.152 01:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.411 01:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.411 01:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.411 01:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.411 01:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.411 01:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.411 01:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:42.411 { 00:11:42.411 "cntlid": 89, 00:11:42.411 "qid": 0, 00:11:42.411 "state": "enabled", 00:11:42.411 "thread": "nvmf_tgt_poll_group_000", 00:11:42.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:11:42.411 "listen_address": { 00:11:42.411 "trtype": "TCP", 00:11:42.411 "adrfam": "IPv4", 00:11:42.411 "traddr": "10.0.0.3", 00:11:42.411 "trsvcid": "4420" 00:11:42.411 }, 00:11:42.411 "peer_address": { 00:11:42.411 
"trtype": "TCP", 00:11:42.411 "adrfam": "IPv4", 00:11:42.411 "traddr": "10.0.0.1", 00:11:42.411 "trsvcid": "57660" 00:11:42.411 }, 00:11:42.411 "auth": { 00:11:42.411 "state": "completed", 00:11:42.411 "digest": "sha384", 00:11:42.412 "dhgroup": "ffdhe8192" 00:11:42.412 } 00:11:42.412 } 00:11:42.412 ]' 00:11:42.412 01:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:42.412 01:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:42.412 01:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:42.412 01:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:42.412 01:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:42.412 01:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.412 01:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.412 01:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.981 01:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:11:42.981 01:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:11:43.549 01:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.549 01:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:11:43.549 01:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.549 01:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.549 01:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.549 01:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:43.549 01:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:43.549 01:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:43.808 01:52:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:11:43.808 01:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:43.808 01:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:43.808 01:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:43.808 01:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:43.808 01:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.808 01:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.808 01:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.808 01:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.808 01:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.808 01:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.808 01:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.808 01:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.376 00:11:44.376 01:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:44.376 01:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.376 01:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:44.636 01:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.636 01:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.636 01:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.636 01:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.636 01:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.636 01:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:44.636 { 00:11:44.636 "cntlid": 91, 00:11:44.636 "qid": 0, 00:11:44.636 "state": "enabled", 00:11:44.636 "thread": "nvmf_tgt_poll_group_000", 00:11:44.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 
00:11:44.636 "listen_address": { 00:11:44.636 "trtype": "TCP", 00:11:44.636 "adrfam": "IPv4", 00:11:44.636 "traddr": "10.0.0.3", 00:11:44.636 "trsvcid": "4420" 00:11:44.636 }, 00:11:44.636 "peer_address": { 00:11:44.636 "trtype": "TCP", 00:11:44.636 "adrfam": "IPv4", 00:11:44.636 "traddr": "10.0.0.1", 00:11:44.636 "trsvcid": "57692" 00:11:44.636 }, 00:11:44.636 "auth": { 00:11:44.636 "state": "completed", 00:11:44.636 "digest": "sha384", 00:11:44.636 "dhgroup": "ffdhe8192" 00:11:44.636 } 00:11:44.636 } 00:11:44.636 ]' 00:11:44.636 01:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:44.895 01:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:44.895 01:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:44.895 01:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:44.895 01:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:44.895 01:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.895 01:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.895 01:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.155 01:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:11:45.155 01:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:11:45.723 01:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.723 01:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:11:45.723 01:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.723 01:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.723 01:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.723 01:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:45.723 01:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:45.723 01:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:45.983 01:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:11:45.983 01:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:45.983 01:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:45.983 01:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:45.983 01:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:45.983 01:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.983 01:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.983 01:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.983 01:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.983 01:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.983 01:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.983 01:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.983 01:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.551 00:11:46.822 01:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:46.822 01:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:46.822 01:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.822 01:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.822 01:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.822 01:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.822 01:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.096 01:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.096 01:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:47.096 { 00:11:47.096 "cntlid": 93, 00:11:47.096 "qid": 0, 00:11:47.096 "state": "enabled", 00:11:47.096 "thread": 
"nvmf_tgt_poll_group_000", 00:11:47.096 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:11:47.096 "listen_address": { 00:11:47.096 "trtype": "TCP", 00:11:47.096 "adrfam": "IPv4", 00:11:47.096 "traddr": "10.0.0.3", 00:11:47.096 "trsvcid": "4420" 00:11:47.096 }, 00:11:47.096 "peer_address": { 00:11:47.096 "trtype": "TCP", 00:11:47.096 "adrfam": "IPv4", 00:11:47.096 "traddr": "10.0.0.1", 00:11:47.096 "trsvcid": "51146" 00:11:47.096 }, 00:11:47.096 "auth": { 00:11:47.096 "state": "completed", 00:11:47.096 "digest": "sha384", 00:11:47.096 "dhgroup": "ffdhe8192" 00:11:47.096 } 00:11:47.096 } 00:11:47.096 ]' 00:11:47.096 01:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:47.096 01:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:47.096 01:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:47.096 01:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:47.096 01:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:47.096 01:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.096 01:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.096 01:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.355 01:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:11:47.355 01:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:11:47.924 01:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.924 01:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:11:47.924 01:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.924 01:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.924 01:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.924 01:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:47.924 01:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:47.924 01:52:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:48.183 01:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:11:48.183 01:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:48.183 01:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:48.183 01:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:48.183 01:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:48.183 01:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.183 01:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key3 00:11:48.183 01:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.183 01:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.183 01:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.183 01:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:48.183 01:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:48.183 01:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:48.760 00:11:48.760 01:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:48.760 01:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:48.760 01:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.020 01:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.020 01:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.020 01:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.020 01:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.020 01:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.020 01:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:49.020 { 00:11:49.020 "cntlid": 95, 00:11:49.020 "qid": 0, 00:11:49.020 "state": "enabled", 00:11:49.020 
"thread": "nvmf_tgt_poll_group_000", 00:11:49.020 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:11:49.020 "listen_address": { 00:11:49.020 "trtype": "TCP", 00:11:49.020 "adrfam": "IPv4", 00:11:49.020 "traddr": "10.0.0.3", 00:11:49.020 "trsvcid": "4420" 00:11:49.020 }, 00:11:49.020 "peer_address": { 00:11:49.020 "trtype": "TCP", 00:11:49.020 "adrfam": "IPv4", 00:11:49.020 "traddr": "10.0.0.1", 00:11:49.020 "trsvcid": "51172" 00:11:49.020 }, 00:11:49.020 "auth": { 00:11:49.021 "state": "completed", 00:11:49.021 "digest": "sha384", 00:11:49.021 "dhgroup": "ffdhe8192" 00:11:49.021 } 00:11:49.021 } 00:11:49.021 ]' 00:11:49.021 01:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:49.280 01:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:49.280 01:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:49.280 01:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:49.280 01:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:49.280 01:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.280 01:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.280 01:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.539 01:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:11:49.539 01:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:11:50.107 01:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.107 01:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:11:50.107 01:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.107 01:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.107 01:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.107 01:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:50.107 01:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:50.107 01:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:50.107 01:53:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:50.107 01:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:50.366 01:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:11:50.367 01:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:50.367 01:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:50.367 01:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:50.367 01:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:50.367 01:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.367 01:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.367 01:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.367 01:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.367 01:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.367 01:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.367 01:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.367 01:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.934 00:11:50.934 01:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:50.934 01:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:50.934 01:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.194 01:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.194 01:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.194 01:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.194 01:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.194 01:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.194 01:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:51.194 { 00:11:51.194 "cntlid": 97, 00:11:51.194 "qid": 0, 00:11:51.194 "state": "enabled", 00:11:51.194 "thread": "nvmf_tgt_poll_group_000", 00:11:51.194 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:11:51.194 "listen_address": { 00:11:51.194 "trtype": "TCP", 00:11:51.194 "adrfam": "IPv4", 00:11:51.194 "traddr": "10.0.0.3", 00:11:51.194 "trsvcid": "4420" 00:11:51.194 }, 00:11:51.194 "peer_address": { 00:11:51.194 "trtype": "TCP", 00:11:51.194 "adrfam": "IPv4", 00:11:51.194 "traddr": "10.0.0.1", 00:11:51.194 "trsvcid": "51196" 00:11:51.194 }, 00:11:51.194 "auth": { 00:11:51.194 "state": "completed", 00:11:51.194 "digest": "sha512", 00:11:51.194 "dhgroup": "null" 00:11:51.194 } 00:11:51.194 } 00:11:51.194 ]' 00:11:51.194 01:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:51.194 01:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:51.194 01:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:51.194 01:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:51.194 01:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:51.194 01:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.194 01:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.194 01:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.454 01:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:11:51.454 01:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:11:52.391 01:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.391 01:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:11:52.391 01:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.391 01:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.391 01:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:52.391 01:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:52.391 01:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:52.391 01:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:52.391 01:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:11:52.391 01:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:52.391 01:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:52.391 01:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:52.391 01:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:52.391 01:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.391 01:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.391 01:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.391 01:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.391 01:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.391 01:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.391 01:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.391 01:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.959 00:11:52.959 01:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:52.959 01:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.959 01:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:53.218 01:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.218 01:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.218 01:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.218 01:53:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.218 01:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.218 01:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:53.218 { 00:11:53.218 "cntlid": 99, 00:11:53.218 "qid": 0, 00:11:53.218 "state": "enabled", 00:11:53.218 "thread": "nvmf_tgt_poll_group_000", 00:11:53.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:11:53.218 "listen_address": { 00:11:53.218 "trtype": "TCP", 00:11:53.218 "adrfam": "IPv4", 00:11:53.218 "traddr": "10.0.0.3", 00:11:53.218 "trsvcid": "4420" 00:11:53.218 }, 00:11:53.218 "peer_address": { 00:11:53.218 "trtype": "TCP", 00:11:53.218 "adrfam": "IPv4", 00:11:53.218 "traddr": "10.0.0.1", 00:11:53.218 "trsvcid": "51240" 00:11:53.218 }, 00:11:53.218 "auth": { 00:11:53.218 "state": "completed", 00:11:53.218 "digest": "sha512", 00:11:53.218 "dhgroup": "null" 00:11:53.218 } 00:11:53.218 } 00:11:53.218 ]' 00:11:53.218 01:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:53.218 01:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:53.218 01:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:53.218 01:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:53.218 01:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:53.218 01:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.218 01:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.218 01:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.477 01:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:11:53.477 01:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:11:54.045 01:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.045 01:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:11:54.045 01:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.045 01:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.045 01:53:04 
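
Each connect_authenticate round provisions the same key pair on both ends before dialing: nvmf_subsystem_add_host tells the target which DH-HMAC-CHAP key (and, for bidirectional authentication, controller key) a given host NQN may use, and bdev_nvme_attach_controller makes the host present them. A sketch of one key2 round with the NQNs, address, and key names from this run (key2/ckey2 are names of keys loaded earlier in the test, referenced here rather than secret values):

    # Target side: authorize the host NQN, bound to key2 and ckey2
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # Host side: attach the controller, offering the matching keys
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion seen in the trace appends the controller-key argument only when a ckey exists for that index, which is why the key3 rounds in this log pass --dhchap-key key3 alone: that key is exercised with unidirectional, host-only authentication.
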
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.045 01:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:54.045 01:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:54.045 01:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:54.304 01:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:11:54.304 01:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:54.304 01:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:54.305 01:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:54.305 01:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:54.305 01:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.305 01:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.305 01:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.305 01:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.305 01:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.305 01:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.305 01:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.305 01:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.563 00:11:54.822 01:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:54.822 01:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:54.822 01:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.082 01:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.082 01:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.082 01:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.082 01:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.082 01:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.082 01:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:55.082 { 00:11:55.082 "cntlid": 101, 00:11:55.082 "qid": 0, 00:11:55.082 "state": "enabled", 00:11:55.082 "thread": "nvmf_tgt_poll_group_000", 00:11:55.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:11:55.082 "listen_address": { 00:11:55.082 "trtype": "TCP", 00:11:55.082 "adrfam": "IPv4", 00:11:55.082 "traddr": "10.0.0.3", 00:11:55.082 "trsvcid": "4420" 00:11:55.082 }, 00:11:55.082 "peer_address": { 00:11:55.082 "trtype": "TCP", 00:11:55.082 "adrfam": "IPv4", 00:11:55.082 "traddr": "10.0.0.1", 00:11:55.082 "trsvcid": "51274" 00:11:55.082 }, 00:11:55.082 "auth": { 00:11:55.082 "state": "completed", 00:11:55.082 "digest": "sha512", 00:11:55.082 "dhgroup": "null" 00:11:55.082 } 00:11:55.082 } 00:11:55.082 ]' 00:11:55.082 01:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:55.082 01:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:55.082 01:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:55.082 01:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:55.082 01:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:55.082 01:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.082 01:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.082 01:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.340 01:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:11:55.341 01:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:11:56.277 01:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.277 01:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:11:56.277 01:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.277 01:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:11:56.277 01:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.277 01:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:56.277 01:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:56.277 01:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:56.277 01:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:11:56.277 01:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:56.277 01:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:56.277 01:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:56.277 01:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:56.277 01:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.277 01:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key3 00:11:56.277 01:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.277 01:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.278 01:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.278 01:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:56.278 01:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:56.278 01:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:56.846 00:11:56.846 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:56.846 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:56.846 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.846 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:56.846 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:56.846 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:56.846 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.105 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.105 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:57.105 { 00:11:57.105 "cntlid": 103, 00:11:57.105 "qid": 0, 00:11:57.105 "state": "enabled", 00:11:57.105 "thread": "nvmf_tgt_poll_group_000", 00:11:57.105 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:11:57.105 "listen_address": { 00:11:57.105 "trtype": "TCP", 00:11:57.105 "adrfam": "IPv4", 00:11:57.105 "traddr": "10.0.0.3", 00:11:57.105 "trsvcid": "4420" 00:11:57.105 }, 00:11:57.105 "peer_address": { 00:11:57.105 "trtype": "TCP", 00:11:57.105 "adrfam": "IPv4", 00:11:57.105 "traddr": "10.0.0.1", 00:11:57.105 "trsvcid": "49878" 00:11:57.105 }, 00:11:57.105 "auth": { 00:11:57.105 "state": "completed", 00:11:57.105 "digest": "sha512", 00:11:57.105 "dhgroup": "null" 00:11:57.105 } 00:11:57.105 } 00:11:57.105 ]' 00:11:57.105 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:57.105 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:57.105 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:57.105 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:57.105 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:57.105 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.105 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.105 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.364 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:11:57.364 01:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:11:57.932 01:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.932 01:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:11:57.932 01:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.932 01:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.932 01:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:11:57.932 01:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:57.932 01:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:57.932 01:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:57.932 01:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:58.191 01:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:11:58.191 01:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:58.191 01:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:58.191 01:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:58.191 01:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:58.191 01:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.191 01:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.191 01:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.191 01:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.191 01:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.191 01:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.191 01:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.191 01:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.758 00:11:58.758 01:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:58.758 01:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:58.758 01:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.016 01:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.016 01:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.016 
01:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.016 01:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.016 01:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.016 01:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:59.016 { 00:11:59.016 "cntlid": 105, 00:11:59.016 "qid": 0, 00:11:59.016 "state": "enabled", 00:11:59.016 "thread": "nvmf_tgt_poll_group_000", 00:11:59.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:11:59.016 "listen_address": { 00:11:59.016 "trtype": "TCP", 00:11:59.016 "adrfam": "IPv4", 00:11:59.016 "traddr": "10.0.0.3", 00:11:59.016 "trsvcid": "4420" 00:11:59.016 }, 00:11:59.016 "peer_address": { 00:11:59.016 "trtype": "TCP", 00:11:59.016 "adrfam": "IPv4", 00:11:59.016 "traddr": "10.0.0.1", 00:11:59.016 "trsvcid": "49906" 00:11:59.016 }, 00:11:59.016 "auth": { 00:11:59.016 "state": "completed", 00:11:59.016 "digest": "sha512", 00:11:59.016 "dhgroup": "ffdhe2048" 00:11:59.016 } 00:11:59.016 } 00:11:59.016 ]' 00:11:59.017 01:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:59.017 01:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:59.017 01:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:59.017 01:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:59.017 01:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:59.017 01:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.017 01:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.017 01:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.275 01:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:11:59.275 01:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:11:59.844 01:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.844 01:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:11:59.844 01:53:10 
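
Every round also exercises the kernel host stack: nvme_connect hands the literal secret values (not key names) to nvme-cli, and the nvme disconnect plus nvmf_subsystem_remove_host lines that follow tear the round down so the next digest/dhgroup/key combination starts clean. A sketch mirroring the key0 round just above, with placeholder secrets; the real DHHC-1 strings are the base64 blobs in the log:

    # Kernel initiator: DH-HMAC-CHAP secrets are passed literally on the command line
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 \
        --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 \
        --dhchap-secret 'DHHC-1:00:<base64 secret>:' \
        --dhchap-ctrl-secret 'DHHC-1:03:<base64 secret>:'
    # Tear down and de-authorize the host so the next round starts clean
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89
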
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.844 01:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.844 01:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.844 01:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:59.844 01:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:59.844 01:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:00.103 01:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:12:00.103 01:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:00.103 01:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:00.103 01:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:00.103 01:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:00.103 01:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.103 01:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.103 01:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.103 01:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.362 01:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.362 01:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.362 01:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.362 01:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.622 00:12:00.622 01:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:00.622 01:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:00.622 01:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:00.882 01:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:12:00.882 01:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.882 01:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.882 01:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.882 01:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.882 01:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:00.882 { 00:12:00.882 "cntlid": 107, 00:12:00.882 "qid": 0, 00:12:00.882 "state": "enabled", 00:12:00.882 "thread": "nvmf_tgt_poll_group_000", 00:12:00.882 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:12:00.882 "listen_address": { 00:12:00.882 "trtype": "TCP", 00:12:00.882 "adrfam": "IPv4", 00:12:00.882 "traddr": "10.0.0.3", 00:12:00.882 "trsvcid": "4420" 00:12:00.882 }, 00:12:00.882 "peer_address": { 00:12:00.882 "trtype": "TCP", 00:12:00.882 "adrfam": "IPv4", 00:12:00.882 "traddr": "10.0.0.1", 00:12:00.882 "trsvcid": "49936" 00:12:00.882 }, 00:12:00.882 "auth": { 00:12:00.882 "state": "completed", 00:12:00.882 "digest": "sha512", 00:12:00.882 "dhgroup": "ffdhe2048" 00:12:00.882 } 00:12:00.882 } 00:12:00.882 ]' 00:12:00.882 01:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:00.882 01:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:00.882 01:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:00.882 01:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:00.882 01:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:01.141 01:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.141 01:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.141 01:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.400 01:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:12:01.400 01:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:12:01.968 01:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.968 01:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:12:01.968 01:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.968 01:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.968 01:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.968 01:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:01.968 01:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:01.968 01:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:02.227 01:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:12:02.227 01:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:02.227 01:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:02.227 01:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:02.227 01:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:02.227 01:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.227 01:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.227 01:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.227 01:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.227 01:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.227 01:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.227 01:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.227 01:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.794 00:12:02.794 01:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:02.794 01:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:02.794 01:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:12:03.054 01:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.054 01:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.054 01:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.054 01:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.054 01:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.054 01:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:03.054 { 00:12:03.054 "cntlid": 109, 00:12:03.054 "qid": 0, 00:12:03.054 "state": "enabled", 00:12:03.054 "thread": "nvmf_tgt_poll_group_000", 00:12:03.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:12:03.054 "listen_address": { 00:12:03.054 "trtype": "TCP", 00:12:03.054 "adrfam": "IPv4", 00:12:03.054 "traddr": "10.0.0.3", 00:12:03.054 "trsvcid": "4420" 00:12:03.054 }, 00:12:03.054 "peer_address": { 00:12:03.054 "trtype": "TCP", 00:12:03.054 "adrfam": "IPv4", 00:12:03.054 "traddr": "10.0.0.1", 00:12:03.054 "trsvcid": "49960" 00:12:03.054 }, 00:12:03.054 "auth": { 00:12:03.054 "state": "completed", 00:12:03.054 "digest": "sha512", 00:12:03.054 "dhgroup": "ffdhe2048" 00:12:03.054 } 00:12:03.054 } 00:12:03.054 ]' 00:12:03.054 01:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:03.054 01:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:03.054 01:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:03.054 01:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:03.054 01:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:03.054 01:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.054 01:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.054 01:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.313 01:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:12:03.313 01:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:12:04.251 01:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:04.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:04.251 01:53:14 
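
The secret strings themselves are self-describing. Per the NVMe DH-HMAC-CHAP secret representation as implemented by nvme-cli, the format is DHHC-1:<t>:<base64 payload>: where <t> indicates how the underlying secret was transformed: 00 for none, and 01/02/03 for SHA-256/384/512, which matches the differing prefixes and payload lengths of the secrets in this log. A small sketch of splitting one apart; the secret value is copied from this run, but the field semantics come from the spec and nvme-cli conventions, not from the log itself:

    secret='DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh:'
    # Split on ':' into magic / transform / payload fields
    IFS=: read -r magic transform payload _ <<< "$secret"
    echo "$magic"      # DHHC-1
    echo "$transform"  # 01 -> secret transformed with SHA-256
    echo "$payload"    # base64 key material (nvme-cli appends a CRC32 before encoding)
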
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:12:04.251 01:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.251 01:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.251 01:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.251 01:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:04.251 01:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:04.251 01:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:04.251 01:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:12:04.251 01:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:04.251 01:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:04.251 01:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:04.251 01:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:04.251 01:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.251 01:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key3 00:12:04.251 01:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.251 01:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.251 01:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.251 01:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:04.251 01:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:04.251 01:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:04.819 00:12:04.819 01:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:04.819 01:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:04.819 01:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.078 01:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.078 01:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.078 01:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.078 01:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.078 01:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.078 01:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:05.078 { 00:12:05.078 "cntlid": 111, 00:12:05.078 "qid": 0, 00:12:05.078 "state": "enabled", 00:12:05.078 "thread": "nvmf_tgt_poll_group_000", 00:12:05.078 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:12:05.078 "listen_address": { 00:12:05.078 "trtype": "TCP", 00:12:05.078 "adrfam": "IPv4", 00:12:05.078 "traddr": "10.0.0.3", 00:12:05.078 "trsvcid": "4420" 00:12:05.078 }, 00:12:05.078 "peer_address": { 00:12:05.078 "trtype": "TCP", 00:12:05.078 "adrfam": "IPv4", 00:12:05.078 "traddr": "10.0.0.1", 00:12:05.078 "trsvcid": "49982" 00:12:05.078 }, 00:12:05.078 "auth": { 00:12:05.078 "state": "completed", 00:12:05.078 "digest": "sha512", 00:12:05.078 "dhgroup": "ffdhe2048" 00:12:05.078 } 00:12:05.078 } 00:12:05.078 ]' 00:12:05.078 01:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:05.078 01:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:05.078 01:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:05.078 01:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:05.078 01:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:05.078 01:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.078 01:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.078 01:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.337 01:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:12:05.337 01:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:12:05.906 01:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.906 01:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:12:05.906 01:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.906 01:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.906 01:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.906 01:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:05.906 01:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:05.906 01:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:05.906 01:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:06.165 01:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:12:06.165 01:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:06.165 01:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:06.165 01:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:06.165 01:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:06.165 01:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.165 01:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:06.165 01:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.165 01:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.165 01:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.165 01:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:06.165 01:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:06.165 01:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:06.733 00:12:06.733 01:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:06.733 01:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.733 01:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:06.992 01:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.992 01:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.992 01:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.992 01:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.992 01:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.992 01:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:06.992 { 00:12:06.992 "cntlid": 113, 00:12:06.992 "qid": 0, 00:12:06.992 "state": "enabled", 00:12:06.992 "thread": "nvmf_tgt_poll_group_000", 00:12:06.992 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:12:06.992 "listen_address": { 00:12:06.992 "trtype": "TCP", 00:12:06.992 "adrfam": "IPv4", 00:12:06.992 "traddr": "10.0.0.3", 00:12:06.992 "trsvcid": "4420" 00:12:06.992 }, 00:12:06.992 "peer_address": { 00:12:06.992 "trtype": "TCP", 00:12:06.992 "adrfam": "IPv4", 00:12:06.992 "traddr": "10.0.0.1", 00:12:06.992 "trsvcid": "44012" 00:12:06.992 }, 00:12:06.992 "auth": { 00:12:06.992 "state": "completed", 00:12:06.992 "digest": "sha512", 00:12:06.992 "dhgroup": "ffdhe3072" 00:12:06.992 } 00:12:06.992 } 00:12:06.992 ]' 00:12:06.992 01:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:06.992 01:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:06.992 01:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:06.992 01:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:06.992 01:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:06.992 01:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.992 01:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.992 01:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.251 01:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:12:07.251 01:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret 
DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:12:07.820 01:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.820 01:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:12:07.820 01:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.820 01:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.820 01:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.820 01:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:07.820 01:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:07.820 01:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:08.389 01:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:12:08.389 01:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:08.389 01:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:08.389 01:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:08.389 01:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:08.389 01:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.389 01:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.389 01:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.389 01:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.389 01:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.389 01:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.389 01:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.389 01:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.648 00:12:08.648 01:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:08.648 01:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.648 01:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:08.906 01:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.906 01:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.906 01:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.906 01:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.906 01:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.906 01:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:08.906 { 00:12:08.906 "cntlid": 115, 00:12:08.906 "qid": 0, 00:12:08.906 "state": "enabled", 00:12:08.906 "thread": "nvmf_tgt_poll_group_000", 00:12:08.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:12:08.906 "listen_address": { 00:12:08.906 "trtype": "TCP", 00:12:08.906 "adrfam": "IPv4", 00:12:08.906 "traddr": "10.0.0.3", 00:12:08.906 "trsvcid": "4420" 00:12:08.906 }, 00:12:08.906 "peer_address": { 00:12:08.906 "trtype": "TCP", 00:12:08.906 "adrfam": "IPv4", 00:12:08.906 "traddr": "10.0.0.1", 00:12:08.906 "trsvcid": "44034" 00:12:08.906 }, 00:12:08.906 "auth": { 00:12:08.906 "state": "completed", 00:12:08.906 "digest": "sha512", 00:12:08.906 "dhgroup": "ffdhe3072" 00:12:08.906 } 00:12:08.906 } 00:12:08.906 ]' 00:12:08.906 01:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:08.906 01:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:08.906 01:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:08.906 01:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:08.906 01:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:08.906 01:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.906 01:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.906 01:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.165 01:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:12:09.165 01:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 
7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:12:09.732 01:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.732 01:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:12:09.732 01:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.732 01:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.732 01:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.732 01:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:09.732 01:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:09.732 01:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:10.301 01:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:12:10.301 01:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:10.301 01:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:10.301 01:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:10.301 01:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:10.301 01:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.301 01:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:10.301 01:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.301 01:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.301 01:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.301 01:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:10.301 01:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:10.301 01:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:10.560 00:12:10.560 01:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:10.560 01:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.560 01:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:10.820 01:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.820 01:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.820 01:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.820 01:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.820 01:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.820 01:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:10.820 { 00:12:10.820 "cntlid": 117, 00:12:10.820 "qid": 0, 00:12:10.820 "state": "enabled", 00:12:10.820 "thread": "nvmf_tgt_poll_group_000", 00:12:10.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:12:10.820 "listen_address": { 00:12:10.820 "trtype": "TCP", 00:12:10.820 "adrfam": "IPv4", 00:12:10.820 "traddr": "10.0.0.3", 00:12:10.820 "trsvcid": "4420" 00:12:10.820 }, 00:12:10.820 "peer_address": { 00:12:10.820 "trtype": "TCP", 00:12:10.820 "adrfam": "IPv4", 00:12:10.820 "traddr": "10.0.0.1", 00:12:10.820 "trsvcid": "44058" 00:12:10.820 }, 00:12:10.820 "auth": { 00:12:10.820 "state": "completed", 00:12:10.820 "digest": "sha512", 00:12:10.820 "dhgroup": "ffdhe3072" 00:12:10.820 } 00:12:10.820 } 00:12:10.820 ]' 00:12:10.820 01:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:10.820 01:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:10.820 01:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:10.820 01:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:10.820 01:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:10.820 01:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.820 01:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.820 01:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.079 01:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:12:11.079 01:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:12:11.646 01:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.646 01:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:12:11.646 01:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.646 01:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.904 01:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.904 01:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:11.905 01:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:11.905 01:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:11.905 01:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:12:11.905 01:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:11.905 01:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:11.905 01:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:11.905 01:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:11.905 01:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.905 01:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key3 00:12:11.905 01:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.905 01:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.905 01:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.905 01:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:11.905 01:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:11.905 01:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:12.473 00:12:12.473 01:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:12.473 01:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:12.473 01:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.473 01:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.473 01:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.473 01:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.473 01:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.473 01:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.473 01:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:12.473 { 00:12:12.473 "cntlid": 119, 00:12:12.473 "qid": 0, 00:12:12.473 "state": "enabled", 00:12:12.473 "thread": "nvmf_tgt_poll_group_000", 00:12:12.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:12:12.473 "listen_address": { 00:12:12.473 "trtype": "TCP", 00:12:12.473 "adrfam": "IPv4", 00:12:12.473 "traddr": "10.0.0.3", 00:12:12.473 "trsvcid": "4420" 00:12:12.473 }, 00:12:12.473 "peer_address": { 00:12:12.473 "trtype": "TCP", 00:12:12.473 "adrfam": "IPv4", 00:12:12.473 "traddr": "10.0.0.1", 00:12:12.473 "trsvcid": "44090" 00:12:12.473 }, 00:12:12.473 "auth": { 00:12:12.473 "state": "completed", 00:12:12.473 "digest": "sha512", 00:12:12.473 "dhgroup": "ffdhe3072" 00:12:12.473 } 00:12:12.473 } 00:12:12.473 ]' 00:12:12.473 01:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:12.733 01:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:12.733 01:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:12.733 01:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:12.733 01:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:12.733 01:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.733 01:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.733 01:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.994 01:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:12:12.994 01:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:12:13.563 01:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.563 01:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:12:13.563 01:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.563 01:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.563 01:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.563 01:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:13.563 01:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:13.563 01:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:13.563 01:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:13.823 01:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:12:13.823 01:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:13.823 01:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:13.823 01:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:13.823 01:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:13.823 01:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.823 01:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.823 01:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.823 01:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.823 01:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.823 01:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.823 01:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.823 01:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:14.391 00:12:14.391 01:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:14.391 01:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:14.391 01:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.650 01:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.650 01:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.650 01:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.650 01:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.650 01:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.650 01:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:14.650 { 00:12:14.650 "cntlid": 121, 00:12:14.650 "qid": 0, 00:12:14.650 "state": "enabled", 00:12:14.650 "thread": "nvmf_tgt_poll_group_000", 00:12:14.650 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:12:14.650 "listen_address": { 00:12:14.650 "trtype": "TCP", 00:12:14.650 "adrfam": "IPv4", 00:12:14.650 "traddr": "10.0.0.3", 00:12:14.650 "trsvcid": "4420" 00:12:14.650 }, 00:12:14.650 "peer_address": { 00:12:14.650 "trtype": "TCP", 00:12:14.650 "adrfam": "IPv4", 00:12:14.650 "traddr": "10.0.0.1", 00:12:14.650 "trsvcid": "44114" 00:12:14.650 }, 00:12:14.650 "auth": { 00:12:14.650 "state": "completed", 00:12:14.650 "digest": "sha512", 00:12:14.650 "dhgroup": "ffdhe4096" 00:12:14.650 } 00:12:14.650 } 00:12:14.650 ]' 00:12:14.650 01:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:14.650 01:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:14.650 01:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:14.650 01:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:14.650 01:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:14.650 01:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.650 01:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.650 01:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.910 01:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret 
DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:12:14.910 01:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:12:15.478 01:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.478 01:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:12:15.478 01:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.478 01:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.478 01:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.478 01:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:15.478 01:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:15.478 01:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:15.738 01:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:12:15.738 01:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:15.738 01:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:15.738 01:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:15.738 01:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:15.738 01:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.738 01:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.738 01:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.738 01:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.738 01:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.738 01:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.738 01:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.738 01:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:16.306 00:12:16.306 01:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:16.306 01:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.306 01:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:16.564 01:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.564 01:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.564 01:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.564 01:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.564 01:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.564 01:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:16.564 { 00:12:16.564 "cntlid": 123, 00:12:16.564 "qid": 0, 00:12:16.564 "state": "enabled", 00:12:16.564 "thread": "nvmf_tgt_poll_group_000", 00:12:16.564 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:12:16.564 "listen_address": { 00:12:16.565 "trtype": "TCP", 00:12:16.565 "adrfam": "IPv4", 00:12:16.565 "traddr": "10.0.0.3", 00:12:16.565 "trsvcid": "4420" 00:12:16.565 }, 00:12:16.565 "peer_address": { 00:12:16.565 "trtype": "TCP", 00:12:16.565 "adrfam": "IPv4", 00:12:16.565 "traddr": "10.0.0.1", 00:12:16.565 "trsvcid": "39118" 00:12:16.565 }, 00:12:16.565 "auth": { 00:12:16.565 "state": "completed", 00:12:16.565 "digest": "sha512", 00:12:16.565 "dhgroup": "ffdhe4096" 00:12:16.565 } 00:12:16.565 } 00:12:16.565 ]' 00:12:16.565 01:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:16.565 01:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:16.565 01:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:16.565 01:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:16.565 01:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:16.823 01:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.823 01:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.823 01:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.081 01:53:27 
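Each attach is followed by the same assertions just traced above: the host reports the expected controller name, and the target-side qpair reports the negotiated digest, dhgroup, and a completed auth state, right before the controller is detached. A sketch of that verification step, using the same jq filters as the trace (rpc path copied from this log):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Host side: the attached controller must be the one we created.
  name=$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]

  # Target side: inspect the qpair's negotiated auth parameters.
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]
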
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:12:17.082 01:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:12:17.648 01:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.648 01:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:12:17.648 01:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.648 01:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.648 01:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.648 01:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:17.648 01:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:17.648 01:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:17.907 01:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:12:17.907 01:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:17.907 01:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:17.907 01:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:17.907 01:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:17.907 01:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.907 01:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.907 01:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.907 01:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.907 01:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.907 01:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.907 01:53:28 
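After the RPC-driven check, each pass repeats the authentication through the kernel initiator: nvme-cli receives the host and controller secrets in the DHHC-1 wire format and hands them to the fabrics layer. The command below mirrors the key1/ckey1 connect earlier in this trace; -i 1 limits the session to one I/O queue, -l 0 disables the controller-loss timeout, and the service ID is left to nvme-cli's TCP default of 4420:

  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 \
      --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 \
      --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: \
      --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==:

Secrets in this representation can be minted with nvme gen-dhchap-key on a recent nvme-cli; the two-digit field after DHHC-1 appears to record the optional secret transform (00 = none, 01/02/03 = SHA-256/384/512), which would explain why the four keys in this run carry the four different prefixes.
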
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:17.907 01:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:18.474 00:12:18.474 01:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:18.474 01:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.474 01:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:18.735 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.735 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.735 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.735 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.735 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.735 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:18.735 { 00:12:18.735 "cntlid": 125, 00:12:18.735 "qid": 0, 00:12:18.735 "state": "enabled", 00:12:18.735 "thread": "nvmf_tgt_poll_group_000", 00:12:18.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:12:18.735 "listen_address": { 00:12:18.735 "trtype": "TCP", 00:12:18.735 "adrfam": "IPv4", 00:12:18.735 "traddr": "10.0.0.3", 00:12:18.735 "trsvcid": "4420" 00:12:18.735 }, 00:12:18.735 "peer_address": { 00:12:18.735 "trtype": "TCP", 00:12:18.735 "adrfam": "IPv4", 00:12:18.735 "traddr": "10.0.0.1", 00:12:18.735 "trsvcid": "39148" 00:12:18.735 }, 00:12:18.735 "auth": { 00:12:18.735 "state": "completed", 00:12:18.735 "digest": "sha512", 00:12:18.735 "dhgroup": "ffdhe4096" 00:12:18.735 } 00:12:18.735 } 00:12:18.735 ]' 00:12:18.735 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:18.735 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:18.735 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:18.735 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:18.735 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:18.735 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.735 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.735 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.302 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:12:19.302 01:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:12:19.871 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.871 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:12:19.871 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.871 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.871 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.871 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:19.871 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:19.871 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:20.130 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:12:20.130 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:20.130 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:20.130 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:20.130 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:20.130 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.130 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key3 00:12:20.130 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.130 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.130 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.130 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:12:20.130 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:20.130 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:20.390 00:12:20.390 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:20.390 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.390 01:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:20.958 01:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.958 01:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.958 01:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.958 01:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.958 01:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.958 01:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:20.958 { 00:12:20.958 "cntlid": 127, 00:12:20.958 "qid": 0, 00:12:20.958 "state": "enabled", 00:12:20.958 "thread": "nvmf_tgt_poll_group_000", 00:12:20.958 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:12:20.958 "listen_address": { 00:12:20.958 "trtype": "TCP", 00:12:20.958 "adrfam": "IPv4", 00:12:20.958 "traddr": "10.0.0.3", 00:12:20.958 "trsvcid": "4420" 00:12:20.958 }, 00:12:20.958 "peer_address": { 00:12:20.958 "trtype": "TCP", 00:12:20.958 "adrfam": "IPv4", 00:12:20.958 "traddr": "10.0.0.1", 00:12:20.958 "trsvcid": "39180" 00:12:20.958 }, 00:12:20.958 "auth": { 00:12:20.958 "state": "completed", 00:12:20.958 "digest": "sha512", 00:12:20.958 "dhgroup": "ffdhe4096" 00:12:20.958 } 00:12:20.958 } 00:12:20.958 ]' 00:12:20.958 01:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:20.958 01:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:20.958 01:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:20.958 01:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:20.958 01:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:20.958 01:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.958 01:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.958 01:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.217 01:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:12:21.217 01:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:12:22.154 01:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.154 01:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:12:22.154 01:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.154 01:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.154 01:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.154 01:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:22.154 01:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:22.154 01:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:22.154 01:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:22.154 01:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:12:22.154 01:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:22.154 01:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:22.154 01:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:22.154 01:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:22.154 01:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.154 01:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:22.154 01:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.154 01:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.154 01:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.154 01:53:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:22.154 01:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:22.154 01:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:22.722 00:12:22.722 01:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:22.722 01:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:22.722 01:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.290 01:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.290 01:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.290 01:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.290 01:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.290 01:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.290 01:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:23.290 { 00:12:23.290 "cntlid": 129, 00:12:23.290 "qid": 0, 00:12:23.290 "state": "enabled", 00:12:23.290 "thread": "nvmf_tgt_poll_group_000", 00:12:23.290 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:12:23.290 "listen_address": { 00:12:23.290 "trtype": "TCP", 00:12:23.290 "adrfam": "IPv4", 00:12:23.290 "traddr": "10.0.0.3", 00:12:23.290 "trsvcid": "4420" 00:12:23.290 }, 00:12:23.290 "peer_address": { 00:12:23.290 "trtype": "TCP", 00:12:23.290 "adrfam": "IPv4", 00:12:23.290 "traddr": "10.0.0.1", 00:12:23.290 "trsvcid": "39214" 00:12:23.290 }, 00:12:23.290 "auth": { 00:12:23.290 "state": "completed", 00:12:23.290 "digest": "sha512", 00:12:23.290 "dhgroup": "ffdhe6144" 00:12:23.290 } 00:12:23.290 } 00:12:23.290 ]' 00:12:23.290 01:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:23.290 01:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:23.290 01:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:23.290 01:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:23.290 01:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:23.290 01:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.290 01:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.290 01:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.550 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:12:23.550 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:12:24.117 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.117 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:12:24.117 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.117 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.117 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.117 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:24.117 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:24.117 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:24.376 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:12:24.376 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:24.376 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:24.376 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:24.376 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:24.376 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.376 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:24.376 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.376 01:53:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.376 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.376 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:24.376 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:24.376 01:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:24.943 00:12:24.943 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:24.943 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.943 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:25.203 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.203 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.203 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.203 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.203 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.203 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:25.203 { 00:12:25.203 "cntlid": 131, 00:12:25.203 "qid": 0, 00:12:25.203 "state": "enabled", 00:12:25.203 "thread": "nvmf_tgt_poll_group_000", 00:12:25.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:12:25.203 "listen_address": { 00:12:25.203 "trtype": "TCP", 00:12:25.203 "adrfam": "IPv4", 00:12:25.203 "traddr": "10.0.0.3", 00:12:25.203 "trsvcid": "4420" 00:12:25.203 }, 00:12:25.203 "peer_address": { 00:12:25.203 "trtype": "TCP", 00:12:25.203 "adrfam": "IPv4", 00:12:25.203 "traddr": "10.0.0.1", 00:12:25.203 "trsvcid": "39234" 00:12:25.203 }, 00:12:25.203 "auth": { 00:12:25.203 "state": "completed", 00:12:25.203 "digest": "sha512", 00:12:25.203 "dhgroup": "ffdhe6144" 00:12:25.203 } 00:12:25.203 } 00:12:25.203 ]' 00:12:25.203 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:25.203 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:25.203 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:25.203 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:25.203 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:12:25.462 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.462 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.462 01:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.720 01:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:12:25.720 01:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:12:26.320 01:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.320 01:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:12:26.320 01:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.320 01:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.320 01:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.320 01:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:26.320 01:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:26.320 01:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:26.578 01:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:12:26.578 01:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:26.578 01:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:26.578 01:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:26.578 01:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:26.578 01:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.578 01:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:26.579 01:53:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.579 01:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.579 01:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.579 01:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:26.579 01:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:26.579 01:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:26.837 00:12:27.097 01:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:27.097 01:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:27.097 01:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.356 01:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.356 01:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.356 01:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.356 01:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.356 01:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.356 01:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:27.356 { 00:12:27.356 "cntlid": 133, 00:12:27.356 "qid": 0, 00:12:27.356 "state": "enabled", 00:12:27.356 "thread": "nvmf_tgt_poll_group_000", 00:12:27.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:12:27.356 "listen_address": { 00:12:27.356 "trtype": "TCP", 00:12:27.356 "adrfam": "IPv4", 00:12:27.356 "traddr": "10.0.0.3", 00:12:27.356 "trsvcid": "4420" 00:12:27.356 }, 00:12:27.356 "peer_address": { 00:12:27.356 "trtype": "TCP", 00:12:27.356 "adrfam": "IPv4", 00:12:27.356 "traddr": "10.0.0.1", 00:12:27.356 "trsvcid": "56622" 00:12:27.356 }, 00:12:27.356 "auth": { 00:12:27.356 "state": "completed", 00:12:27.356 "digest": "sha512", 00:12:27.356 "dhgroup": "ffdhe6144" 00:12:27.356 } 00:12:27.356 } 00:12:27.356 ]' 00:12:27.356 01:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:27.356 01:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:27.357 01:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:27.357 01:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:12:27.357 01:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:27.357 01:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.357 01:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.357 01:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.616 01:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:12:27.616 01:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:12:28.553 01:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.553 01:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:12:28.553 01:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.553 01:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.553 01:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.553 01:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:28.553 01:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:28.553 01:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:28.810 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:12:28.810 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:28.810 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:28.810 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:28.810 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:28.810 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.810 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key3 00:12:28.810 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.810 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.810 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.810 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:28.810 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:28.811 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:29.068 00:12:29.068 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:29.068 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:29.068 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.327 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.327 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.327 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.327 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.327 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.327 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:29.327 { 00:12:29.327 "cntlid": 135, 00:12:29.327 "qid": 0, 00:12:29.327 "state": "enabled", 00:12:29.327 "thread": "nvmf_tgt_poll_group_000", 00:12:29.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:12:29.327 "listen_address": { 00:12:29.327 "trtype": "TCP", 00:12:29.327 "adrfam": "IPv4", 00:12:29.327 "traddr": "10.0.0.3", 00:12:29.327 "trsvcid": "4420" 00:12:29.327 }, 00:12:29.327 "peer_address": { 00:12:29.327 "trtype": "TCP", 00:12:29.327 "adrfam": "IPv4", 00:12:29.327 "traddr": "10.0.0.1", 00:12:29.327 "trsvcid": "56644" 00:12:29.327 }, 00:12:29.327 "auth": { 00:12:29.327 "state": "completed", 00:12:29.327 "digest": "sha512", 00:12:29.327 "dhgroup": "ffdhe6144" 00:12:29.327 } 00:12:29.327 } 00:12:29.327 ]' 00:12:29.327 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:29.586 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:29.586 01:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:29.586 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:29.586 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:29.586 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.586 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.586 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.845 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:12:29.845 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:12:30.413 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.413 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:12:30.413 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.413 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.413 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.413 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:30.413 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:30.413 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:30.413 01:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:30.673 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:12:30.673 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:30.673 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:30.673 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:30.673 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:30.673 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.673 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.673 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.673 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.673 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.673 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.673 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.673 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:31.611 00:12:31.611 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:31.611 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.611 01:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:31.611 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.611 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.611 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.611 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.611 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.611 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:31.611 { 00:12:31.611 "cntlid": 137, 00:12:31.611 "qid": 0, 00:12:31.611 "state": "enabled", 00:12:31.611 "thread": "nvmf_tgt_poll_group_000", 00:12:31.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:12:31.611 "listen_address": { 00:12:31.611 "trtype": "TCP", 00:12:31.611 "adrfam": "IPv4", 00:12:31.611 "traddr": "10.0.0.3", 00:12:31.611 "trsvcid": "4420" 00:12:31.611 }, 00:12:31.611 "peer_address": { 00:12:31.611 "trtype": "TCP", 00:12:31.611 "adrfam": "IPv4", 00:12:31.611 "traddr": "10.0.0.1", 00:12:31.611 "trsvcid": "56668" 00:12:31.611 }, 00:12:31.611 "auth": { 00:12:31.611 "state": "completed", 00:12:31.611 "digest": "sha512", 00:12:31.611 "dhgroup": "ffdhe8192" 00:12:31.611 } 00:12:31.611 } 00:12:31.611 ]' 00:12:31.611 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:31.611 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:31.611 01:53:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:31.870 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:31.870 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:31.870 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.870 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.870 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.129 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:12:32.129 01:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:12:32.697 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.698 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:12:32.698 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.698 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.698 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.698 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:32.698 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:32.698 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:32.957 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:12:32.957 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:32.957 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:32.957 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:32.957 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:32.957 01:53:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.957 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.957 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.957 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.957 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.957 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.957 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.957 01:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.894 00:12:33.894 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:33.894 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:33.894 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.894 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.894 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.894 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.894 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.894 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.894 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:33.894 { 00:12:33.894 "cntlid": 139, 00:12:33.894 "qid": 0, 00:12:33.894 "state": "enabled", 00:12:33.894 "thread": "nvmf_tgt_poll_group_000", 00:12:33.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:12:33.894 "listen_address": { 00:12:33.894 "trtype": "TCP", 00:12:33.894 "adrfam": "IPv4", 00:12:33.894 "traddr": "10.0.0.3", 00:12:33.894 "trsvcid": "4420" 00:12:33.894 }, 00:12:33.894 "peer_address": { 00:12:33.894 "trtype": "TCP", 00:12:33.894 "adrfam": "IPv4", 00:12:33.894 "traddr": "10.0.0.1", 00:12:33.894 "trsvcid": "56702" 00:12:33.894 }, 00:12:33.894 "auth": { 00:12:33.894 "state": "completed", 00:12:33.894 "digest": "sha512", 00:12:33.894 "dhgroup": "ffdhe8192" 00:12:33.894 } 00:12:33.894 } 00:12:33.894 ]' 00:12:33.894 01:53:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:34.153 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:34.153 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:34.153 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:34.153 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:34.153 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.153 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.153 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.412 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:12:34.412 01:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: --dhchap-ctrl-secret DHHC-1:02:YjQzYjMwMGQ3YzJkY2I1MDJlMTliZmUyYzUzNmE2YTI2ZTk2MTU4ZDAwMThhOGQ3aTCp0w==: 00:12:34.979 01:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.979 01:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:12:34.979 01:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.979 01:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.979 01:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.979 01:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:34.979 01:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:34.979 01:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:35.238 01:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:12:35.238 01:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:35.238 01:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:35.238 01:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:12:35.238 01:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:35.238 01:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.238 01:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.238 01:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.238 01:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.238 01:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.238 01:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.238 01:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.238 01:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:36.175 00:12:36.175 01:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:36.175 01:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:36.175 01:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.433 01:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.433 01:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.433 01:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.433 01:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.434 01:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.434 01:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:36.434 { 00:12:36.434 "cntlid": 141, 00:12:36.434 "qid": 0, 00:12:36.434 "state": "enabled", 00:12:36.434 "thread": "nvmf_tgt_poll_group_000", 00:12:36.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:12:36.434 "listen_address": { 00:12:36.434 "trtype": "TCP", 00:12:36.434 "adrfam": "IPv4", 00:12:36.434 "traddr": "10.0.0.3", 00:12:36.434 "trsvcid": "4420" 00:12:36.434 }, 00:12:36.434 "peer_address": { 00:12:36.434 "trtype": "TCP", 00:12:36.434 "adrfam": "IPv4", 00:12:36.434 "traddr": "10.0.0.1", 00:12:36.434 "trsvcid": "36756" 00:12:36.434 }, 00:12:36.434 "auth": { 00:12:36.434 "state": "completed", 00:12:36.434 "digest": 
"sha512", 00:12:36.434 "dhgroup": "ffdhe8192" 00:12:36.434 } 00:12:36.434 } 00:12:36.434 ]' 00:12:36.434 01:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:36.434 01:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:36.434 01:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:36.434 01:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:36.434 01:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:36.434 01:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.434 01:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.434 01:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.692 01:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:12:36.693 01:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:01:ZjMyZmI4NWMxNzdkMTQ1OTQzNTExNjQyOTU3YmM4OTYH78Ik: 00:12:37.259 01:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.519 01:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:12:37.519 01:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.519 01:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.519 01:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.519 01:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:37.519 01:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:37.519 01:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:37.779 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:12:37.779 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:37.779 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:12:37.779 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:37.779 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:37.779 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.779 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key3 00:12:37.779 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.779 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.779 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.779 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:37.779 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:37.779 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:38.369 00:12:38.369 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:38.369 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:38.369 01:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.628 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.628 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.628 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.628 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.628 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.628 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:38.628 { 00:12:38.628 "cntlid": 143, 00:12:38.628 "qid": 0, 00:12:38.628 "state": "enabled", 00:12:38.628 "thread": "nvmf_tgt_poll_group_000", 00:12:38.628 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:12:38.628 "listen_address": { 00:12:38.628 "trtype": "TCP", 00:12:38.628 "adrfam": "IPv4", 00:12:38.628 "traddr": "10.0.0.3", 00:12:38.628 "trsvcid": "4420" 00:12:38.628 }, 00:12:38.628 "peer_address": { 00:12:38.628 "trtype": "TCP", 00:12:38.628 "adrfam": "IPv4", 00:12:38.628 "traddr": "10.0.0.1", 00:12:38.628 "trsvcid": "36782" 00:12:38.628 }, 00:12:38.628 "auth": { 00:12:38.628 "state": "completed", 00:12:38.628 
"digest": "sha512", 00:12:38.628 "dhgroup": "ffdhe8192" 00:12:38.628 } 00:12:38.628 } 00:12:38.628 ]' 00:12:38.628 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:38.886 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:38.886 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:38.886 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:38.886 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:38.886 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.886 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.886 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.144 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:12:39.144 01:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:12:40.081 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.081 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:12:40.081 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.081 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.081 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.081 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:40.081 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:12:40.081 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:40.081 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:40.081 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:40.081 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:40.081 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:12:40.081 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:40.081 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:40.081 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:40.081 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:40.081 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.081 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.081 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.081 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.081 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.081 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.081 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.081 01:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.673 00:12:40.673 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:40.673 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:40.673 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.932 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.932 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.932 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.932 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.932 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.932 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:40.932 { 00:12:40.932 "cntlid": 145, 00:12:40.932 "qid": 0, 00:12:40.932 "state": "enabled", 00:12:40.932 "thread": "nvmf_tgt_poll_group_000", 00:12:40.932 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:12:40.932 "listen_address": { 00:12:40.932 "trtype": "TCP", 00:12:40.932 "adrfam": "IPv4", 00:12:40.932 "traddr": "10.0.0.3", 00:12:40.932 "trsvcid": "4420" 00:12:40.932 }, 00:12:40.932 "peer_address": { 00:12:40.932 "trtype": "TCP", 00:12:40.932 "adrfam": "IPv4", 00:12:40.932 "traddr": "10.0.0.1", 00:12:40.932 "trsvcid": "36804" 00:12:40.932 }, 00:12:40.932 "auth": { 00:12:40.932 "state": "completed", 00:12:40.932 "digest": "sha512", 00:12:40.932 "dhgroup": "ffdhe8192" 00:12:40.932 } 00:12:40.932 } 00:12:40.932 ]' 00:12:40.932 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:41.191 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:41.191 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:41.191 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:41.191 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:41.191 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.192 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.192 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.450 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:12:41.450 01:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:00:ZWRiMjIxNTI1ZWQ5OWQ0ZWQ3N2FjYmQwMmIwOTEwZGZhZTY0MzAwODM1MDhkMTY4Yz27RA==: --dhchap-ctrl-secret DHHC-1:03:YjZmNjM2MGNjZGUyZjIyNGU0NTZmNDQ4YjU1MjhmZWQwNDVlYmRlYmFhN2JmYzA1NDFkOTExNzI1NjcwNjAwNmATqMY=: 00:12:42.019 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.019 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:12:42.019 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.019 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.019 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.019 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key1 00:12:42.019 01:53:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.019 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.019 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.019 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:12:42.019 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:42.019 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:12:42.019 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:42.019 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:42.019 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:42.019 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:42.019 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:12:42.019 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:42.019 01:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:42.587 request: 00:12:42.587 { 00:12:42.587 "name": "nvme0", 00:12:42.587 "trtype": "tcp", 00:12:42.587 "traddr": "10.0.0.3", 00:12:42.587 "adrfam": "ipv4", 00:12:42.587 "trsvcid": "4420", 00:12:42.587 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:42.587 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:12:42.587 "prchk_reftag": false, 00:12:42.587 "prchk_guard": false, 00:12:42.587 "hdgst": false, 00:12:42.587 "ddgst": false, 00:12:42.587 "dhchap_key": "key2", 00:12:42.587 "allow_unrecognized_csi": false, 00:12:42.587 "method": "bdev_nvme_attach_controller", 00:12:42.587 "req_id": 1 00:12:42.587 } 00:12:42.587 Got JSON-RPC error response 00:12:42.587 response: 00:12:42.587 { 00:12:42.587 "code": -5, 00:12:42.587 "message": "Input/output error" 00:12:42.587 } 00:12:42.845 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:42.845 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:42.845 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:42.845 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:42.845 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:12:42.845 
01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.845 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.845 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.845 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.845 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.845 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.845 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.845 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:42.845 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:42.845 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:42.845 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:42.845 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:42.846 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:42.846 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:42.846 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:42.846 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:42.846 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:43.413 request: 00:12:43.413 { 00:12:43.413 "name": "nvme0", 00:12:43.413 "trtype": "tcp", 00:12:43.413 "traddr": "10.0.0.3", 00:12:43.413 "adrfam": "ipv4", 00:12:43.413 "trsvcid": "4420", 00:12:43.413 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:43.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:12:43.413 "prchk_reftag": false, 00:12:43.413 "prchk_guard": false, 00:12:43.413 "hdgst": false, 00:12:43.413 "ddgst": false, 00:12:43.413 "dhchap_key": "key1", 00:12:43.413 "dhchap_ctrlr_key": "ckey2", 00:12:43.413 "allow_unrecognized_csi": false, 00:12:43.413 "method": "bdev_nvme_attach_controller", 00:12:43.413 "req_id": 1 00:12:43.413 } 00:12:43.413 Got JSON-RPC error response 00:12:43.413 response: 00:12:43.413 { 
00:12:43.413 "code": -5, 00:12:43.413 "message": "Input/output error" 00:12:43.413 } 00:12:43.413 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:43.413 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:43.413 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:43.413 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:43.413 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:12:43.413 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.413 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.413 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.413 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key1 00:12:43.413 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.413 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.413 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.413 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:43.413 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:43.413 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:43.413 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:43.413 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:43.413 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:43.413 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:43.413 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:43.413 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:43.413 01:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:43.981 
request: 00:12:43.981 { 00:12:43.981 "name": "nvme0", 00:12:43.981 "trtype": "tcp", 00:12:43.981 "traddr": "10.0.0.3", 00:12:43.981 "adrfam": "ipv4", 00:12:43.981 "trsvcid": "4420", 00:12:43.981 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:43.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:12:43.982 "prchk_reftag": false, 00:12:43.982 "prchk_guard": false, 00:12:43.982 "hdgst": false, 00:12:43.982 "ddgst": false, 00:12:43.982 "dhchap_key": "key1", 00:12:43.982 "dhchap_ctrlr_key": "ckey1", 00:12:43.982 "allow_unrecognized_csi": false, 00:12:43.982 "method": "bdev_nvme_attach_controller", 00:12:43.982 "req_id": 1 00:12:43.982 } 00:12:43.982 Got JSON-RPC error response 00:12:43.982 response: 00:12:43.982 { 00:12:43.982 "code": -5, 00:12:43.982 "message": "Input/output error" 00:12:43.982 } 00:12:43.982 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:43.982 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:43.982 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:43.982 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:43.982 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:12:43.982 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.982 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.982 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.982 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 79078 00:12:43.982 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 79078 ']' 00:12:43.982 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 79078 00:12:43.982 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:12:43.982 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:43.982 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79078 00:12:43.982 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:43.982 killing process with pid 79078 00:12:43.982 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:43.982 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79078' 00:12:43.982 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 79078 00:12:43.982 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 79078 00:12:43.982 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:12:43.982 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:43.982 01:53:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:43.982 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.241 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=82148 00:12:44.241 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 82148 00:12:44.241 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 82148 ']' 00:12:44.241 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:12:44.241 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.241 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:44.241 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.241 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.241 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.501 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.501 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:44.501 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:44.501 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:44.501 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.501 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.501 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:44.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.501 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 82148 00:12:44.501 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 82148 ']' 00:12:44.501 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.501 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:44.501 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:44.501 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.501 01:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.759 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.759 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.760 null0 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.w1A 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.1Xo ]] 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1Xo 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.XC3 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.XoE ]] 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.XoE 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:44.760 01:53:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.nyF 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.L9q ]] 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.L9q 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.dpn 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key3 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:12:44.760 01:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:45.697 nvme0n1 00:12:45.956 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:45.956 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:45.956 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.214 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.214 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.214 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.214 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.214 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.214 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:46.214 { 00:12:46.214 "cntlid": 1, 00:12:46.214 "qid": 0, 00:12:46.214 "state": "enabled", 00:12:46.214 "thread": "nvmf_tgt_poll_group_000", 00:12:46.214 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:12:46.214 "listen_address": { 00:12:46.214 "trtype": "TCP", 00:12:46.214 "adrfam": "IPv4", 00:12:46.214 "traddr": "10.0.0.3", 00:12:46.214 "trsvcid": "4420" 00:12:46.214 }, 00:12:46.214 "peer_address": { 00:12:46.214 "trtype": "TCP", 00:12:46.214 "adrfam": "IPv4", 00:12:46.214 "traddr": "10.0.0.1", 00:12:46.214 "trsvcid": "36874" 00:12:46.214 }, 00:12:46.214 "auth": { 00:12:46.214 "state": "completed", 00:12:46.214 "digest": "sha512", 00:12:46.214 "dhgroup": "ffdhe8192" 00:12:46.214 } 00:12:46.214 } 00:12:46.214 ]' 00:12:46.214 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:46.214 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:46.214 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:46.214 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:46.214 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:46.214 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.214 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.214 01:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.782 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:12:46.782 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:12:47.349 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.349 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:12:47.349 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.349 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.349 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.349 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key3 00:12:47.349 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.349 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.349 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.349 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:12:47.349 01:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:12:47.609 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:47.609 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:47.609 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:47.609 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:47.609 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:47.609 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:47.609 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:47.609 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:47.609 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:47.609 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:47.868 request: 00:12:47.868 { 00:12:47.868 "name": "nvme0", 00:12:47.868 "trtype": "tcp", 00:12:47.868 "traddr": "10.0.0.3", 00:12:47.868 "adrfam": "ipv4", 00:12:47.868 "trsvcid": "4420", 00:12:47.868 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:47.868 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:12:47.868 "prchk_reftag": false, 00:12:47.868 "prchk_guard": false, 00:12:47.868 "hdgst": false, 00:12:47.868 "ddgst": false, 00:12:47.868 "dhchap_key": "key3", 00:12:47.868 "allow_unrecognized_csi": false, 00:12:47.868 "method": "bdev_nvme_attach_controller", 00:12:47.868 "req_id": 1 00:12:47.868 } 00:12:47.868 Got JSON-RPC error response 00:12:47.868 response: 00:12:47.868 { 00:12:47.868 "code": -5, 00:12:47.868 "message": "Input/output error" 00:12:47.868 } 00:12:47.868 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:47.868 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:47.868 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:47.868 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:47.868 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:12:47.868 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:12:47.868 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:47.868 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:48.127 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:48.127 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:48.127 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:48.127 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:48.127 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:48.127 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:48.127 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:48.127 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:48.127 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:48.127 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:48.699 request: 00:12:48.699 { 00:12:48.699 "name": "nvme0", 00:12:48.699 "trtype": "tcp", 00:12:48.699 "traddr": "10.0.0.3", 00:12:48.699 "adrfam": "ipv4", 00:12:48.699 "trsvcid": "4420", 00:12:48.699 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:48.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:12:48.699 "prchk_reftag": false, 00:12:48.699 "prchk_guard": false, 00:12:48.699 "hdgst": false, 00:12:48.699 "ddgst": false, 00:12:48.699 "dhchap_key": "key3", 00:12:48.699 "allow_unrecognized_csi": false, 00:12:48.699 "method": "bdev_nvme_attach_controller", 00:12:48.699 "req_id": 1 00:12:48.699 } 00:12:48.699 Got JSON-RPC error response 00:12:48.699 response: 00:12:48.699 { 00:12:48.699 "code": -5, 00:12:48.699 "message": "Input/output error" 00:12:48.699 } 00:12:48.699 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:48.699 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:48.699 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:48.699 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:48.699 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:48.699 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:12:48.699 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:48.699 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:48.699 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:48.699 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:48.699 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:12:48.699 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.699 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.699 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.699 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:12:48.699 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.699 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.699 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.699 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:48.699 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:48.699 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:48.699 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:12:48.962 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:48.962 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:12:48.962 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:48.962 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:48.962 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:48.962 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:49.222 request: 00:12:49.222 { 00:12:49.222 "name": "nvme0", 00:12:49.222 "trtype": "tcp", 00:12:49.222 "traddr": "10.0.0.3", 00:12:49.222 "adrfam": "ipv4", 00:12:49.222 "trsvcid": "4420", 00:12:49.222 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:49.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89", 00:12:49.222 "prchk_reftag": false, 00:12:49.222 "prchk_guard": false, 00:12:49.222 "hdgst": false, 00:12:49.222 "ddgst": false, 00:12:49.222 "dhchap_key": "key0", 00:12:49.222 "dhchap_ctrlr_key": "key1", 00:12:49.222 "allow_unrecognized_csi": false, 00:12:49.222 "method": "bdev_nvme_attach_controller", 00:12:49.222 "req_id": 1 00:12:49.222 } 00:12:49.222 Got JSON-RPC error response 00:12:49.222 response: 00:12:49.222 { 00:12:49.222 "code": -5, 00:12:49.222 "message": "Input/output error" 00:12:49.222 } 00:12:49.222 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:12:49.222 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:49.222 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:49.222 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:12:49.222 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:12:49.222 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:49.222 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:49.480 nvme0n1 00:12:49.481 01:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:12:49.481 01:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.481 01:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:12:49.739 01:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.739 01:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.739 01:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.315 01:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key1 00:12:50.315 01:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.315 01:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.315 01:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.315 01:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:12:50.315 01:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:50.315 01:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:51.252 nvme0n1 00:12:51.252 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:12:51.252 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:12:51.252 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.252 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.252 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:51.252 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.252 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.511 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.511 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:12:51.511 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:12:51.511 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.770 01:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.770 01:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:12:51.770 01:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid 7cdc77f7-6c10-48d3-83fa-703a290bdf89 -l 0 --dhchap-secret DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: --dhchap-ctrl-secret DHHC-1:03:OTQ1ZjJiYTEwZDBhMDVkM2JjMWU2Mzk4Yjg0ZmI2MTNhY2UwNWQ2Y2Y0OTNhYzYzZTMyNDI4MzM2OTgyOWE2MO2P43U=: 00:12:52.337 01:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:12:52.337 01:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:12:52.337 01:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:12:52.337 01:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:12:52.337 01:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:12:52.337 01:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:12:52.337 01:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:12:52.337 01:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:52.337 01:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.595 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:12:52.596 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:12:52.596 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1
00:12:52.596 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:12:52.596 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:52.596 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:12:52.596 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:52.596 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1
00:12:52.596 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:12:52.596 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:12:53.162 request:
00:12:53.162 {
00:12:53.162 "name": "nvme0",
00:12:53.162 "trtype": "tcp",
00:12:53.162 "traddr": "10.0.0.3",
00:12:53.162 "adrfam": "ipv4",
00:12:53.162 "trsvcid": "4420",
00:12:53.162 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:12:53.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89",
00:12:53.162 "prchk_reftag": false,
00:12:53.162 "prchk_guard": false,
00:12:53.162 "hdgst": false,
00:12:53.162 "ddgst": false,
00:12:53.162 "dhchap_key": "key1",
00:12:53.162 "allow_unrecognized_csi": false,
00:12:53.162 "method": "bdev_nvme_attach_controller",
00:12:53.162 "req_id": 1
00:12:53.162 }
00:12:53.162 Got JSON-RPC error response
00:12:53.162 response:
00:12:53.162 {
00:12:53.162 "code": -5,
00:12:53.162 "message": "Input/output error"
00:12:53.162 }
00:12:53.420 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:12:53.421 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:53.421 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:12:53.421 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:53.421 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:12:53.421 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:12:53.421 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:12:54.357 nvme0n1
00:12:54.357
01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:12:54.357 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:12:54.357 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.616 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.616 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.616 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.875 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:12:54.875 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.875 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.875 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.875 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:12:54.875 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:12:54.875 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:12:55.133 nvme0n1 00:12:55.133 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:12:55.133 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:12:55.133 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.392 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.392 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.392 01:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.651 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:55.651 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.651 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.651 01:54:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.651 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: '' 2s 00:12:55.651 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:12:55.651 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:12:55.651 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: 00:12:55.651 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:12:55.651 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:12:55.651 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:12:55.651 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: ]] 00:12:55.651 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MTlkZDgwYTRmMTc3ZWY3OTJhZjAwZDNjZmRiZTY0MGV55DJh: 00:12:55.651 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:12:55.651 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:12:55.651 01:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:12:58.200 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:12:58.200 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:12:58.200 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:12:58.200 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:12:58.200 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:12:58.200 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:12:58.200 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:12:58.200 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key1 --dhchap-ctrlr-key key2 00:12:58.200 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.200 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.200 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.200 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: 2s 00:12:58.200 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:12:58.200 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:12:58.200 01:54:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:12:58.200 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: 00:12:58.200 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:12:58.200 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:12:58.200 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:12:58.200 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: ]] 00:12:58.200 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:Y2Y2ODM1MDI5YjgwZTQ2MWRjNWI4NTRiNzcwMTgwYWY4ZmZmMGJhYjJiMTAyZGQym2YjvA==: 00:12:58.200 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:12:58.200 01:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:00.106 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:13:00.106 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:13:00.106 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:00.106 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:13:00.106 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:13:00.106 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:13:00.106 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:13:00.106 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.106 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:00.106 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.106 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.106 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.106 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:00.106 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:00.106 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:01.043 nvme0n1 00:13:01.043 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:01.043 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.043 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.043 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.043 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:01.043 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:01.609 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:13:01.609 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:13:01.609 01:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.867 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.867 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:13:01.867 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.867 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.867 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.867 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:13:01.867 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:13:02.126 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:13:02.126 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.126 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:13:02.385 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.385 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:02.385 01:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:02.385 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:02.385 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:02.385 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:13:02.385 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:13:02.385 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:13:02.385 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc
00:13:02.385 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:02.385 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc
00:13:02.385 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:02.385 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:13:02.385 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:13:02.988 request:
00:13:02.988 {
00:13:02.988 "name": "nvme0",
00:13:02.988 "dhchap_key": "key1",
00:13:02.988 "dhchap_ctrlr_key": "key3",
00:13:02.988 "method": "bdev_nvme_set_keys",
00:13:02.988 "req_id": 1
00:13:02.988 }
00:13:02.988 Got JSON-RPC error response
00:13:02.988 response:
00:13:02.988 {
00:13:02.988 "code": -13,
00:13:02.988 "message": "Permission denied"
00:13:02.988 }
00:13:02.988 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:13:02.988 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:02.988 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:02.988 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:02.988 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:13:02.988 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:13:02.988 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:03.248 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 ))
00:13:03.248 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s
00:13:04.186 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:13:04.186 01:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:04.186 01:54:14
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:13:04.445 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:13:04.445 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:04.445 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.445 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.704 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.704 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:04.704 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:04.704 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:05.642 nvme0n1 00:13:05.642 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:05.642 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.642 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.642 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.642 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:05.642 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:05.642 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:05.642 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:13:05.642 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.642 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:13:05.642 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.642 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 
--dhchap-key key2 --dhchap-ctrlr-key key0 00:13:05.642 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:06.210 request: 00:13:06.210 { 00:13:06.210 "name": "nvme0", 00:13:06.210 "dhchap_key": "key2", 00:13:06.210 "dhchap_ctrlr_key": "key0", 00:13:06.210 "method": "bdev_nvme_set_keys", 00:13:06.210 "req_id": 1 00:13:06.210 } 00:13:06.211 Got JSON-RPC error response 00:13:06.211 response: 00:13:06.211 { 00:13:06.211 "code": -13, 00:13:06.211 "message": "Permission denied" 00:13:06.211 } 00:13:06.211 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:06.211 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:06.211 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:06.211 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:06.211 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:13:06.211 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:13:06.211 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.470 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:13:06.470 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:13:07.407 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:13:07.407 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.407 01:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:13:07.667 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:13:07.667 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:13:07.667 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:13:07.667 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 79110 00:13:07.667 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 79110 ']' 00:13:07.667 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 79110 00:13:07.667 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:13:07.667 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:07.667 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79110 00:13:07.667 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:07.667 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:07.667 killing process with pid 79110 00:13:07.667 01:54:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79110' 00:13:07.667 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 79110 00:13:07.667 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 79110 00:13:07.927 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:13:07.927 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:07.927 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:13:07.927 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:07.927 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:13:07.927 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:07.927 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:07.927 rmmod nvme_tcp 00:13:07.927 rmmod nvme_fabrics 00:13:07.927 rmmod nvme_keyring 00:13:08.186 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:08.186 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:13:08.186 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:13:08.186 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 82148 ']' 00:13:08.186 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 82148 00:13:08.186 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 82148 ']' 00:13:08.186 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 82148 00:13:08.186 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:13:08.186 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:08.187 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82148 00:13:08.187 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:08.187 killing process with pid 82148 00:13:08.187 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:08.187 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82148' 00:13:08.187 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 82148 00:13:08.187 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 82148 00:13:08.187 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:08.187 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:08.187 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:08.187 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:13:08.187 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
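Note: the DH-HMAC-CHAP rotation that target/auth.sh drove above reduces to three RPC steps per rotation. The following is a condensed sketch, not test output; it assumes the same host RPC socket (/var/tmp/host.sock), NQNs, and keyring names (key0..key3) as this run, and that the target-side rpc.py call goes to the default /var/tmp/spdk.sock.

    # Sketch of one key rotation, as exercised in the trace above.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89

    # 1) Target side: allow the new key pair for this host (default spdk.sock assumed).
    "$RPC" nvmf_subsystem_set_keys "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key2 --dhchap-ctrlr-key key3

    # 2) Host side: re-authenticate the live controller with the new pair.
    "$RPC" -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3

    # 3) Verify the controller survived the rotation; a pair the subsystem
    #    does not allow fails with JSON-RPC -13 "Permission denied", as seen above.
    "$RPC" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'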
00:13:08.187 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:13:08.187 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:13:08.187 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:08.187 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:08.187 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:08.187 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:08.187 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:08.187 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:08.187 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:08.187 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:08.187 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:08.187 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:08.446 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:08.446 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:08.446 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:08.446 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:08.446 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:08.446 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:08.446 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.446 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:08.446 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.446 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:13:08.446 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.w1A /tmp/spdk.key-sha256.XC3 /tmp/spdk.key-sha384.nyF /tmp/spdk.key-sha512.dpn /tmp/spdk.key-sha512.1Xo /tmp/spdk.key-sha384.XoE /tmp/spdk.key-sha256.L9q '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:08.446 ************************************ 00:13:08.446 END TEST nvmf_auth_target 00:13:08.446 ************************************ 00:13:08.446 00:13:08.446 real 3m8.995s 00:13:08.446 user 7m31.975s 00:13:08.446 sys 0m28.456s 00:13:08.446 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.446 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:08.446 01:54:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:13:08.446 01:54:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:08.446 01:54:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:08.446 01:54:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.446 01:54:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:08.446 ************************************ 00:13:08.446 START TEST nvmf_bdevio_no_huge 00:13:08.446 ************************************ 00:13:08.446 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:08.706 * Looking for test storage... 00:13:08.706 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:08.706 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:08.706 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:13:08.706 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:08.706 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:08.706 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:08.706 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:08.706 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:08.706 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:08.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.707 --rc genhtml_branch_coverage=1 00:13:08.707 --rc genhtml_function_coverage=1 00:13:08.707 --rc genhtml_legend=1 00:13:08.707 --rc geninfo_all_blocks=1 00:13:08.707 --rc geninfo_unexecuted_blocks=1 00:13:08.707 00:13:08.707 ' 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:08.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.707 --rc genhtml_branch_coverage=1 00:13:08.707 --rc genhtml_function_coverage=1 00:13:08.707 --rc genhtml_legend=1 00:13:08.707 --rc geninfo_all_blocks=1 00:13:08.707 --rc geninfo_unexecuted_blocks=1 00:13:08.707 00:13:08.707 ' 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:08.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.707 --rc genhtml_branch_coverage=1 00:13:08.707 --rc genhtml_function_coverage=1 00:13:08.707 --rc genhtml_legend=1 00:13:08.707 --rc geninfo_all_blocks=1 00:13:08.707 --rc geninfo_unexecuted_blocks=1 00:13:08.707 00:13:08.707 ' 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:08.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:08.707 --rc genhtml_branch_coverage=1 00:13:08.707 --rc genhtml_function_coverage=1 00:13:08.707 --rc genhtml_legend=1 00:13:08.707 --rc geninfo_all_blocks=1 00:13:08.707 --rc geninfo_unexecuted_blocks=1 00:13:08.707 00:13:08.707 ' 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:08.707 
01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:08.707 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:08.707 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:08.708 
01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:08.708 Cannot find device "nvmf_init_br" 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:08.708 Cannot find device "nvmf_init_br2" 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:08.708 Cannot find device "nvmf_tgt_br" 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:08.708 Cannot find device "nvmf_tgt_br2" 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:08.708 Cannot find device "nvmf_init_br" 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:13:08.708 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:08.708 Cannot find device "nvmf_init_br2" 00:13:08.966 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:13:08.966 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:08.966 Cannot find device "nvmf_tgt_br" 00:13:08.966 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:13:08.966 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:08.966 Cannot find device "nvmf_tgt_br2" 00:13:08.966 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:13:08.966 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:08.966 Cannot find device "nvmf_br" 00:13:08.966 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:13:08.966 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:08.966 Cannot find device "nvmf_init_if" 00:13:08.966 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:13:08.966 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:08.966 Cannot find device "nvmf_init_if2" 00:13:08.966 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:13:08.966 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:13:08.967 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:08.967 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:13:08.967 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:08.967 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:08.967 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:13:08.967 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:08.967 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:08.967 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:08.967 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:08.967 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:08.967 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:08.967 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:08.967 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:08.967 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:08.967 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:08.967 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:08.967 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:08.967 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:08.967 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:08.967 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:08.967 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:08.967 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:08.967 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:08.967 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:08.967 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:09.226 01:54:19 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:09.226 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:09.226 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.118 ms 00:13:09.226 00:13:09.226 --- 10.0.0.3 ping statistics --- 00:13:09.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.226 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:09.226 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:09.226 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.112 ms 00:13:09.226 00:13:09.226 --- 10.0.0.4 ping statistics --- 00:13:09.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.226 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:09.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:09.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:13:09.226 00:13:09.226 --- 10.0.0.1 ping statistics --- 00:13:09.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.226 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:09.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:09.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:13:09.226 00:13:09.226 --- 10.0.0.2 ping statistics --- 00:13:09.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.226 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=82778 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 82778 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 82778 ']' 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:09.226 01:54:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:09.226 [2024-11-19 01:54:19.771945] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
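Condensed from the trace above: the teardown attempts (the "Cannot find device" and "Cannot open network namespace" lines are expected first-run noise) are followed by a fresh veth topology with the target side isolated in its own network namespace. A minimal sketch of the primary path, using only names and addresses taken from the log; the second path (nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2 and 10.0.0.4) is built the same way:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, root namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                              # stitches the *_br peers together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# open TCP/4420 toward the initiator-facing interface; the SPDK_NVMF comment tag is what
# the iptr cleanup helper greps back out of iptables-save later
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

The four ping checks above (10.0.0.3 and 10.0.0.4 from the root namespace, then 10.0.0.1 and 10.0.0.2 from inside nvmf_tgt_ns_spdk) verify both paths across the bridge before any NVMe-oF traffic is attempted. The target is then launched inside the namespace with --no-huge -s 1024, which is the point of this suite variant: DPDK is told to run from a 1024 MB plain-memory pool rather than hugepages.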
00:13:09.226 [2024-11-19 01:54:19.772071] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:09.486 [2024-11-19 01:54:19.931574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:09.486 [2024-11-19 01:54:19.986044] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.486 [2024-11-19 01:54:19.986125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.486 [2024-11-19 01:54:19.986148] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.486 [2024-11-19 01:54:19.986158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.486 [2024-11-19 01:54:19.986167] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:09.486 [2024-11-19 01:54:19.987238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:09.486 [2024-11-19 01:54:19.987383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:13:09.486 [2024-11-19 01:54:19.987528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:13:09.486 [2024-11-19 01:54:19.987532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:09.486 [2024-11-19 01:54:19.993482] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:09.745 [2024-11-19 01:54:20.177168] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:09.745 Malloc0 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.745 01:54:20 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:09.745 [2024-11-19 01:54:20.217904] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:09.745 { 00:13:09.745 "params": { 00:13:09.745 "name": "Nvme$subsystem", 00:13:09.745 "trtype": "$TEST_TRANSPORT", 00:13:09.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:09.745 "adrfam": "ipv4", 00:13:09.745 "trsvcid": "$NVMF_PORT", 00:13:09.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:09.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:09.745 "hdgst": ${hdgst:-false}, 00:13:09.745 "ddgst": ${ddgst:-false} 00:13:09.745 }, 00:13:09.745 "method": "bdev_nvme_attach_controller" 00:13:09.745 } 00:13:09.745 EOF 00:13:09.745 )") 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
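The RPC sequence traced above reduces to five rpc.py calls against the freshly started target. Flags, NQNs, and paths are copied from the log, so this is a condensed replay rather than anything new (the -o flag to nvmf_create_transport is carried over verbatim from the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192          # -u 8192: 8 KiB in-capsule data
$rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

gen_nvmf_target_json then renders the heredoc template above into a concrete bdev_nvme_attach_controller configuration, which bdevio consumes over /dev/fd/62; the rendered JSON is printed next in the trace.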
00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:13:09.745 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:09.745 "params": { 00:13:09.745 "name": "Nvme1", 00:13:09.745 "trtype": "tcp", 00:13:09.745 "traddr": "10.0.0.3", 00:13:09.745 "adrfam": "ipv4", 00:13:09.745 "trsvcid": "4420", 00:13:09.745 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:09.745 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:09.745 "hdgst": false, 00:13:09.745 "ddgst": false 00:13:09.745 }, 00:13:09.745 "method": "bdev_nvme_attach_controller" 00:13:09.745 }' 00:13:09.745 [2024-11-19 01:54:20.275976] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:13:09.745 [2024-11-19 01:54:20.276079] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid82801 ] 00:13:10.004 [2024-11-19 01:54:20.431537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:10.004 [2024-11-19 01:54:20.491200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.004 [2024-11-19 01:54:20.491307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:10.004 [2024-11-19 01:54:20.491318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.004 [2024-11-19 01:54:20.505824] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:10.264 I/O targets: 00:13:10.264 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:10.264 00:13:10.264 00:13:10.264 CUnit - A unit testing framework for C - Version 2.1-3 00:13:10.264 http://cunit.sourceforge.net/ 00:13:10.264 00:13:10.264 00:13:10.264 Suite: bdevio tests on: Nvme1n1 00:13:10.264 Test: blockdev write read block ...passed 00:13:10.264 Test: blockdev write zeroes read block ...passed 00:13:10.264 Test: blockdev write zeroes read no split ...passed 00:13:10.264 Test: blockdev write zeroes read split ...passed 00:13:10.264 Test: blockdev write zeroes read split partial ...passed 00:13:10.264 Test: blockdev reset ...[2024-11-19 01:54:20.730599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:13:10.264 [2024-11-19 01:54:20.730696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf97f00 (9): Bad file descriptor 00:13:10.264 [2024-11-19 01:54:20.750424] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:13:10.264 passed 00:13:10.264 Test: blockdev write read 8 blocks ...passed 00:13:10.264 Test: blockdev write read size > 128k ...passed 00:13:10.264 Test: blockdev write read invalid size ...passed 00:13:10.264 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:10.264 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:10.264 Test: blockdev write read max offset ...passed 00:13:10.264 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:10.264 Test: blockdev writev readv 8 blocks ...passed 00:13:10.264 Test: blockdev writev readv 30 x 1block ...passed 00:13:10.264 Test: blockdev writev readv block ...passed 00:13:10.264 Test: blockdev writev readv size > 128k ...passed 00:13:10.264 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:10.264 Test: blockdev comparev and writev ...[2024-11-19 01:54:20.758941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:10.264 [2024-11-19 01:54:20.759110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:10.264 [2024-11-19 01:54:20.759234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:10.264 [2024-11-19 01:54:20.759324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:10.264 [2024-11-19 01:54:20.759735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:10.264 [2024-11-19 01:54:20.759868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:10.264 [2024-11-19 01:54:20.759984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:10.264 [2024-11-19 01:54:20.760106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:10.264 [2024-11-19 01:54:20.760532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:10.264 [2024-11-19 01:54:20.760666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:10.264 [2024-11-19 01:54:20.760773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:10.264 [2024-11-19 01:54:20.760856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:10.264 [2024-11-19 01:54:20.761256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:10.264 [2024-11-19 01:54:20.761373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:10.264 [2024-11-19 01:54:20.761478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:10.264 [2024-11-19 01:54:20.761587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:10.264 passed 00:13:10.264 Test: blockdev nvme passthru rw ...passed 00:13:10.264 Test: blockdev nvme passthru vendor specific ...[2024-11-19 01:54:20.762574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:10.264 [2024-11-19 01:54:20.762691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:10.264 [2024-11-19 01:54:20.762945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:10.264 [2024-11-19 01:54:20.763064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:10.264 [2024-11-19 01:54:20.763263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:10.264 [2024-11-19 01:54:20.763378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:10.264 [2024-11-19 01:54:20.763590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:10.264 [2024-11-19 01:54:20.763708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:10.264 passed 00:13:10.264 Test: blockdev nvme admin passthru ...passed 00:13:10.264 Test: blockdev copy ...passed 00:13:10.264 00:13:10.264 Run Summary: Type Total Ran Passed Failed Inactive 00:13:10.264 suites 1 1 n/a 0 0 00:13:10.264 tests 23 23 23 0 0 00:13:10.264 asserts 152 152 152 0 n/a 00:13:10.264 00:13:10.264 Elapsed time = 0.175 seconds 00:13:10.523 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.523 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.523 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:10.523 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.523 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:10.523 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:10.523 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:10.523 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:13:10.523 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:10.523 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:13:10.523 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:10.523 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:10.523 rmmod nvme_tcp 00:13:10.523 rmmod nvme_fabrics 00:13:10.523 rmmod nvme_keyring 00:13:10.782 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:10.782 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:13:10.782 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:13:10.782 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 82778 ']' 00:13:10.782 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 82778 00:13:10.782 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 82778 ']' 00:13:10.782 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 82778 00:13:10.782 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:13:10.782 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:10.782 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82778 00:13:10.782 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:13:10.782 killing process with pid 82778 00:13:10.782 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:13:10.782 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82778' 00:13:10.782 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 82778 00:13:10.782 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 82778 00:13:11.041 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:11.041 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:11.041 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:11.041 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:13:11.041 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:13:11.041 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:11.041 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:13:11.041 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:11.041 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:11.041 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:11.041 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:11.041 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:11.041 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:11.041 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:11.041 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:11.041 01:54:21 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:11.041 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:11.041 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:11.041 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:11.300 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:11.300 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:11.300 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:11.300 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:11.300 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.300 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.300 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.300 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:13:11.300 00:13:11.300 real 0m2.762s 00:13:11.300 user 0m7.156s 00:13:11.300 sys 0m1.287s 00:13:11.300 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.300 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:11.300 ************************************ 00:13:11.300 END TEST nvmf_bdevio_no_huge 00:13:11.300 ************************************ 00:13:11.300 01:54:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:11.300 01:54:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:11.300 01:54:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:11.300 01:54:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:11.300 ************************************ 00:13:11.300 START TEST nvmf_tls 00:13:11.300 ************************************ 00:13:11.300 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:11.560 * Looking for test storage... 
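Each suite is driven by run_test, which produces the START/END banners and the real/user/sys timing block seen above. Schematically it behaves like the sketch below, though the real helper in autotest_common.sh also tracks xtrace state and exit codes:

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"              # emits the real/user/sys triple on completion
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}

run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp

The test-storage lookup and lcov probing that follow appear to be the standard prologue every suite script runs on entry via autotest_common.sh.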
00:13:11.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:11.560 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:11.560 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:13:11.560 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:11.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.560 --rc genhtml_branch_coverage=1 00:13:11.560 --rc genhtml_function_coverage=1 00:13:11.560 --rc genhtml_legend=1 00:13:11.560 --rc geninfo_all_blocks=1 00:13:11.560 --rc geninfo_unexecuted_blocks=1 00:13:11.560 00:13:11.560 ' 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:11.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.560 --rc genhtml_branch_coverage=1 00:13:11.560 --rc genhtml_function_coverage=1 00:13:11.560 --rc genhtml_legend=1 00:13:11.560 --rc geninfo_all_blocks=1 00:13:11.560 --rc geninfo_unexecuted_blocks=1 00:13:11.560 00:13:11.560 ' 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:11.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.560 --rc genhtml_branch_coverage=1 00:13:11.560 --rc genhtml_function_coverage=1 00:13:11.560 --rc genhtml_legend=1 00:13:11.560 --rc geninfo_all_blocks=1 00:13:11.560 --rc geninfo_unexecuted_blocks=1 00:13:11.560 00:13:11.560 ' 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:11.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.560 --rc genhtml_branch_coverage=1 00:13:11.560 --rc genhtml_function_coverage=1 00:13:11.560 --rc genhtml_legend=1 00:13:11.560 --rc geninfo_all_blocks=1 00:13:11.560 --rc geninfo_unexecuted_blocks=1 00:13:11.560 00:13:11.560 ' 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:11.560 01:54:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:11.560 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:11.561 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:11.561 
01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:11.561 Cannot find device "nvmf_init_br" 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:11.561 Cannot find device "nvmf_init_br2" 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:11.561 Cannot find device "nvmf_tgt_br" 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:11.561 Cannot find device "nvmf_tgt_br2" 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:11.561 Cannot find device "nvmf_init_br" 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:11.561 Cannot find device "nvmf_init_br2" 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:11.561 Cannot find device "nvmf_tgt_br" 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:13:11.561 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:11.820 Cannot find device "nvmf_tgt_br2" 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:11.820 Cannot find device "nvmf_br" 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:11.820 Cannot find device "nvmf_init_if" 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:11.820 Cannot find device "nvmf_init_if2" 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:11.820 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:11.820 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:11.820 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:12.079 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:12.079 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:12.079 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:12.079 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:12.079 01:54:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:12.079 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:12.079 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:12.079 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:12.079 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:12.079 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:13:12.079 00:13:12.079 --- 10.0.0.3 ping statistics --- 00:13:12.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.079 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:13:12.079 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:12.079 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:12.079 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:13:12.079 00:13:12.079 --- 10.0.0.4 ping statistics --- 00:13:12.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.079 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:13:12.079 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:12.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:12.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:13:12.079 00:13:12.079 --- 10.0.0.1 ping statistics --- 00:13:12.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.079 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:13:12.079 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:12.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:12.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:13:12.079 00:13:12.079 --- 10.0.0.2 ping statistics --- 00:13:12.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.079 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:13:12.079 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:12.079 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:13:12.079 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:12.079 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:12.079 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:12.079 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:12.079 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:12.079 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:12.079 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:12.079 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:12.079 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:12.079 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:12.079 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:12.079 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83031 00:13:12.079 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:12.079 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83031 00:13:12.079 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83031 ']' 00:13:12.079 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.079 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:12.079 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.079 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:12.080 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:12.080 [2024-11-19 01:54:22.588211] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
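Unlike the bdevio run, the TLS target is started with --wait-for-rpc: the app halts before subsystem initialization so the socket layer can be reconfigured while nothing is listening yet. The sock_* RPCs traced below exercise exactly that window; done by hand, the sequence would look roughly like this (framework_start_init is the standard call that resumes the deferred startup, though it falls outside this excerpt):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc sock_set_default_impl -i ssl                        # route new sockets through the ssl impl
$rpc sock_impl_set_options -i ssl --tls-version 13       # pin TLS 1.3
$rpc sock_impl_get_options -i ssl | jq -r .tls_version   # read back: expect 13
$rpc framework_start_init                                # resume initialization (not shown here)

The trace below also flips --tls-version to 7 and toggles --enable-ktls/--disable-ktls the same way, reading each option back with sock_impl_get_options to confirm the setter took effect.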
00:13:12.080 [2024-11-19 01:54:22.588338] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.337 [2024-11-19 01:54:22.744577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.337 [2024-11-19 01:54:22.768008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:12.337 [2024-11-19 01:54:22.768083] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:12.337 [2024-11-19 01:54:22.768096] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:12.337 [2024-11-19 01:54:22.768105] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:12.337 [2024-11-19 01:54:22.768114] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:12.337 [2024-11-19 01:54:22.768521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:12.337 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:12.337 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:12.337 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:12.337 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:12.337 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:12.337 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:12.337 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:13:12.337 01:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:12.595 true 00:13:12.595 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:12.595 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:13:13.162 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:13:13.162 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:13:13.162 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:13.420 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:13.420 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:13:13.420 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:13:13.420 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:13:13.420 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:13.679 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:13:13.679 01:54:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:14.247 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:13:14.247 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:13:14.247 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:14.247 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:13:14.247 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:13:14.247 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:13:14.247 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:14.505 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:14.505 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:13:14.763 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:13:14.763 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:13:14.763 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:15.024 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:13:15.024 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:15.288 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:13:15.288 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:13:15.288 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:15.288 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:15.288 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:15.288 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:13:15.288 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:13:15.288 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:13:15.288 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:15.288 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:15.288 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:15.288 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:15.288 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:15.288 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:13:15.288 01:54:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:13:15.288 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:13:15.288 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:15.547 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:15.547 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:15.547 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.bLNyANlV1d 00:13:15.547 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:13:15.547 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.8oahl20J48 00:13:15.547 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:15.547 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:15.547 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.bLNyANlV1d 00:13:15.547 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.8oahl20J48 00:13:15.547 01:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:15.805 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:16.064 [2024-11-19 01:54:26.502998] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:16.064 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.bLNyANlV1d 00:13:16.064 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.bLNyANlV1d 00:13:16.064 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:16.322 [2024-11-19 01:54:26.806046] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:16.322 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:16.582 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:16.841 [2024-11-19 01:54:27.278140] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:16.841 [2024-11-19 01:54:27.278366] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:16.841 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:17.100 malloc0 00:13:17.100 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:17.359 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 
/tmp/tmp.bLNyANlV1d 00:13:17.618 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:17.878 01:54:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.bLNyANlV1d 00:13:30.082 Initializing NVMe Controllers 00:13:30.082 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:30.082 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:30.082 Initialization complete. Launching workers. 00:13:30.082 ======================================================== 00:13:30.082 Latency(us) 00:13:30.082 Device Information : IOPS MiB/s Average min max 00:13:30.082 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9976.79 38.97 6416.45 1641.43 8750.80 00:13:30.082 ======================================================== 00:13:30.082 Total : 9976.79 38.97 6416.45 1641.43 8750.80 00:13:30.082 00:13:30.082 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bLNyANlV1d 00:13:30.082 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:30.082 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:30.082 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:30.082 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.bLNyANlV1d 00:13:30.082 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:30.082 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83267 00:13:30.082 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:30.082 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83267 /var/tmp/bdevperf.sock 00:13:30.082 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83267 ']' 00:13:30.082 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:30.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:30.082 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:30.082 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:30.083 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:13:30.083 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:30.083 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:30.083 [2024-11-19 01:54:38.611681] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:13:30.083 [2024-11-19 01:54:38.611784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83267 ] 00:13:30.083 [2024-11-19 01:54:38.761643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.083 [2024-11-19 01:54:38.786676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:30.083 [2024-11-19 01:54:38.820379] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:30.083 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:30.083 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:30.083 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.bLNyANlV1d 00:13:30.083 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:30.083 [2024-11-19 01:54:39.411623] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:30.083 TLSTESTn1 00:13:30.083 01:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:30.083 Running I/O for 10 seconds... 
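The PSK files driving both runs above hold keys in the NVMe TLS interchange format produced by format_interchange_psk earlier (NVMeTLSkey-1:<hash>:<base64 payload>:). A minimal sketch of that derivation, assuming it mirrors the inline python used by format_key in nvmf/common.sh: the payload is the configured key bytes followed by their little-endian CRC32, base64-encoded, with hash indicator 01 for a 32-byte key:

python3 - <<'EOF'
import base64, struct, zlib
key = b"00112233445566778899aabbccddeeff"   # 32 bytes -> hash indicator 01
crc = struct.pack("<I", zlib.crc32(key))    # 4-byte integrity check
print("NVMeTLSkey-1:01:" + base64.b64encode(key + crc).decode() + ":")
EOF

Running this reproduces the key0 value generated above. The per-second throughput samples for the bdevperf run follow.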
00:13:31.015 4298.00 IOPS, 16.79 MiB/s [2024-11-19T01:54:43.005Z] 4354.00 IOPS, 17.01 MiB/s [2024-11-19T01:54:43.940Z] 4339.00 IOPS, 16.95 MiB/s [2024-11-19T01:54:44.873Z] 4327.25 IOPS, 16.90 MiB/s [2024-11-19T01:54:45.806Z] 4338.20 IOPS, 16.95 MiB/s [2024-11-19T01:54:46.740Z] 4320.33 IOPS, 16.88 MiB/s [2024-11-19T01:54:47.674Z] 4323.71 IOPS, 16.89 MiB/s [2024-11-19T01:54:49.049Z] 4328.50 IOPS, 16.91 MiB/s [2024-11-19T01:54:49.614Z] 4322.67 IOPS, 16.89 MiB/s [2024-11-19T01:54:49.872Z] 4323.70 IOPS, 16.89 MiB/s 00:13:39.257 Latency(us) 00:13:39.257 [2024-11-19T01:54:49.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:39.257 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:39.257 Verification LBA range: start 0x0 length 0x2000 00:13:39.257 TLSTESTn1 : 10.02 4329.40 16.91 0.00 0.00 29513.31 5391.83 23712.12 00:13:39.257 [2024-11-19T01:54:49.872Z] =================================================================================================================== 00:13:39.257 [2024-11-19T01:54:49.872Z] Total : 4329.40 16.91 0.00 0.00 29513.31 5391.83 23712.12 00:13:39.257 { 00:13:39.257 "results": [ 00:13:39.257 { 00:13:39.257 "job": "TLSTESTn1", 00:13:39.257 "core_mask": "0x4", 00:13:39.257 "workload": "verify", 00:13:39.257 "status": "finished", 00:13:39.257 "verify_range": { 00:13:39.257 "start": 0, 00:13:39.257 "length": 8192 00:13:39.257 }, 00:13:39.257 "queue_depth": 128, 00:13:39.257 "io_size": 4096, 00:13:39.257 "runtime": 10.016178, 00:13:39.257 "iops": 4329.395903307629, 00:13:39.257 "mibps": 16.911702747295426, 00:13:39.257 "io_failed": 0, 00:13:39.257 "io_timeout": 0, 00:13:39.257 "avg_latency_us": 29513.30619282019, 00:13:39.257 "min_latency_us": 5391.825454545455, 00:13:39.257 "max_latency_us": 23712.116363636364 00:13:39.257 } 00:13:39.257 ], 00:13:39.258 "core_count": 1 00:13:39.258 } 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83267 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83267 ']' 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83267 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83267 00:13:39.258 killing process with pid 83267 00:13:39.258 Received shutdown signal, test time was about 10.000000 seconds 00:13:39.258 00:13:39.258 Latency(us) 00:13:39.258 [2024-11-19T01:54:49.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:39.258 [2024-11-19T01:54:49.873Z] =================================================================================================================== 00:13:39.258 [2024-11-19T01:54:49.873Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 83267' 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83267 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83267 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8oahl20J48 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8oahl20J48 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8oahl20J48 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.8oahl20J48 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83394 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83394 /var/tmp/bdevperf.sock 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83394 ']' 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:39.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:39.258 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:39.258 [2024-11-19 01:54:49.861489] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:13:39.258 [2024-11-19 01:54:49.861596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83394 ] 00:13:39.517 [2024-11-19 01:54:50.002334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.517 [2024-11-19 01:54:50.024047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:39.517 [2024-11-19 01:54:50.054645] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:39.517 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:39.517 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:39.517 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8oahl20J48 00:13:40.083 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:40.084 [2024-11-19 01:54:50.691904] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:40.084 [2024-11-19 01:54:50.697697] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:40.084 [2024-11-19 01:54:50.697746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x181ca00 (107): Transport endpoint is not connected 00:13:40.084 [2024-11-19 01:54:50.698742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x181ca00 (9): Bad file descriptor 00:13:40.084 [2024-11-19 01:54:50.699750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:13:40.084 [2024-11-19 01:54:50.700055] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:40.084 [2024-11-19 01:54:50.700091] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:40.084 [2024-11-19 01:54:50.700111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state.
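In this case the initiator offered key_2 (/tmp/tmp.8oahl20J48) while the target only holds the PSK behind key0, so the server aborts the TLS handshake: the first read on the socket fails with errno 107 (ENOTCONN), the qpair flush fails, and the attach is reported as the -5 Input/output error in the JSON-RPC response that follows. The test only asserts that the attach fails; the NOT wrapper inverts the exit status, roughly like this sketch (an assumption: the real helper lives in autotest_common.sh and also handles xtrace and signal bookkeeping):

NOT() {
    local es=0
    "$@" || es=$?
    # succeed only if the wrapped command failed
    ((es != 0))
}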
00:13:40.342 request: 00:13:40.342 { 00:13:40.342 "name": "TLSTEST", 00:13:40.342 "trtype": "tcp", 00:13:40.342 "traddr": "10.0.0.3", 00:13:40.342 "adrfam": "ipv4", 00:13:40.342 "trsvcid": "4420", 00:13:40.342 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:40.342 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:40.342 "prchk_reftag": false, 00:13:40.342 "prchk_guard": false, 00:13:40.342 "hdgst": false, 00:13:40.342 "ddgst": false, 00:13:40.342 "psk": "key0", 00:13:40.342 "allow_unrecognized_csi": false, 00:13:40.342 "method": "bdev_nvme_attach_controller", 00:13:40.342 "req_id": 1 00:13:40.342 } 00:13:40.342 Got JSON-RPC error response 00:13:40.342 response: 00:13:40.342 { 00:13:40.342 "code": -5, 00:13:40.342 "message": "Input/output error" 00:13:40.342 } 00:13:40.342 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83394 00:13:40.342 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83394 ']' 00:13:40.342 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83394 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83394 00:13:40.343 killing process with pid 83394 00:13:40.343 Received shutdown signal, test time was about 10.000000 seconds 00:13:40.343 00:13:40.343 Latency(us) 00:13:40.343 [2024-11-19T01:54:50.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:40.343 [2024-11-19T01:54:50.958Z] =================================================================================================================== 00:13:40.343 [2024-11-19T01:54:50.958Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83394' 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83394 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83394 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.bLNyANlV1d 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.bLNyANlV1d 
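This next case (target/tls.sh@150) keeps the correct key but connects as nqn.2016-06.io.spdk:host2, which was never registered with nvmf_subsystem_add_host, so the target is expected to find no PSK for the TLS identity the initiator presents. The identity is the fixed NVMe0R01 prefix followed by the host NQN and the subsystem NQN, space-separated, as the errors below show:

printf 'NVMe0R01 %s %s\n' nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1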
00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.bLNyANlV1d 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.bLNyANlV1d 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83415 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83415 /var/tmp/bdevperf.sock 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83415 ']' 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:40.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:40.343 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:40.343 [2024-11-19 01:54:50.920098] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:13:40.343 [2024-11-19 01:54:50.920213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83415 ] 00:13:40.601 [2024-11-19 01:54:51.059704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.601 [2024-11-19 01:54:51.081571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.601 [2024-11-19 01:54:51.112969] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:40.601 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:40.601 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:40.601 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.bLNyANlV1d 00:13:41.168 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:13:41.427 [2024-11-19 01:54:51.826491] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:41.427 [2024-11-19 01:54:51.837613] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:41.427 [2024-11-19 01:54:51.837651] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:41.427 [2024-11-19 01:54:51.837730] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:41.427 [2024-11-19 01:54:51.838295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dba00 (107): Transport endpoint is not connected 00:13:41.427 [2024-11-19 01:54:51.839286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9dba00 (9): Bad file descriptor 00:13:41.427 [2024-11-19 01:54:51.840283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:13:41.427 [2024-11-19 01:54:51.840306] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:41.427 [2024-11-19 01:54:51.840329] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:41.427 [2024-11-19 01:54:51.840343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:13:41.427 request: 00:13:41.427 { 00:13:41.427 "name": "TLSTEST", 00:13:41.427 "trtype": "tcp", 00:13:41.427 "traddr": "10.0.0.3", 00:13:41.427 "adrfam": "ipv4", 00:13:41.427 "trsvcid": "4420", 00:13:41.427 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:41.427 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:41.427 "prchk_reftag": false, 00:13:41.427 "prchk_guard": false, 00:13:41.427 "hdgst": false, 00:13:41.427 "ddgst": false, 00:13:41.427 "psk": "key0", 00:13:41.427 "allow_unrecognized_csi": false, 00:13:41.427 "method": "bdev_nvme_attach_controller", 00:13:41.427 "req_id": 1 00:13:41.427 } 00:13:41.427 Got JSON-RPC error response 00:13:41.427 response: 00:13:41.427 { 00:13:41.427 "code": -5, 00:13:41.427 "message": "Input/output error" 00:13:41.427 } 00:13:41.427 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83415 00:13:41.427 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83415 ']' 00:13:41.427 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83415 00:13:41.427 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:41.427 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:41.428 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83415 00:13:41.428 killing process with pid 83415 00:13:41.428 Received shutdown signal, test time was about 10.000000 seconds 00:13:41.428 00:13:41.428 Latency(us) 00:13:41.428 [2024-11-19T01:54:52.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.428 [2024-11-19T01:54:52.043Z] =================================================================================================================== 00:13:41.428 [2024-11-19T01:54:52.043Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:41.428 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:41.428 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:41.428 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83415' 00:13:41.428 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83415 00:13:41.428 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83415 00:13:41.428 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:41.428 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:41.428 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:41.428 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:41.428 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:41.428 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.bLNyANlV1d 00:13:41.428 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:41.428 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.bLNyANlV1d 
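The third negative case uses the same key and host as the successful run but points the subsystem half of the identity at nqn.2016-06.io.spdk:cnode2, which this test deliberately never creates, so the PSK lookup must fail again. For the identity to resolve, both halves would have to exist target-side, along these lines (illustrative only; the serial number here is made up):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem \
    nqn.2016-06.io.spdk:cnode2 -s SPDK00000000000002 -m 10
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 --psk key0

The wrapped attach attempt follows.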
00:13:41.428 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:41.428 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:41.428 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:41.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:41.428 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:41.428 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.bLNyANlV1d 00:13:41.428 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:41.428 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:41.428 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:41.428 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.bLNyANlV1d 00:13:41.428 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:41.428 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83436 00:13:41.428 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:41.428 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:41.428 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83436 /var/tmp/bdevperf.sock 00:13:41.428 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83436 ']' 00:13:41.428 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:41.428 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:41.428 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:41.428 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:41.428 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:41.687 [2024-11-19 01:54:52.076682] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:13:41.687 [2024-11-19 01:54:52.076791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83436 ] 00:13:41.687 [2024-11-19 01:54:52.222096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.687 [2024-11-19 01:54:52.244509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.687 [2024-11-19 01:54:52.276392] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:41.946 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:41.946 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:41.946 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.bLNyANlV1d 00:13:42.205 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:42.465 [2024-11-19 01:54:52.921514] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:42.465 [2024-11-19 01:54:52.927753] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:42.465 [2024-11-19 01:54:52.927993] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:42.465 [2024-11-19 01:54:52.928190] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:42.465 [2024-11-19 01:54:52.928324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1856a00 (107): Transport endpoint is not connected 00:13:42.465 [2024-11-19 01:54:52.929315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1856a00 (9): Bad file descriptor 00:13:42.465 [2024-11-19 01:54:52.930313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:13:42.465 [2024-11-19 01:54:52.930490] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:42.465 [2024-11-19 01:54:52.930642] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:13:42.465 [2024-11-19 01:54:52.930797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state.
00:13:42.465 request: 00:13:42.465 { 00:13:42.465 "name": "TLSTEST", 00:13:42.465 "trtype": "tcp", 00:13:42.465 "traddr": "10.0.0.3", 00:13:42.465 "adrfam": "ipv4", 00:13:42.465 "trsvcid": "4420", 00:13:42.465 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:42.465 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:42.465 "prchk_reftag": false, 00:13:42.465 "prchk_guard": false, 00:13:42.465 "hdgst": false, 00:13:42.465 "ddgst": false, 00:13:42.465 "psk": "key0", 00:13:42.465 "allow_unrecognized_csi": false, 00:13:42.465 "method": "bdev_nvme_attach_controller", 00:13:42.465 "req_id": 1 00:13:42.465 } 00:13:42.465 Got JSON-RPC error response 00:13:42.465 response: 00:13:42.465 { 00:13:42.465 "code": -5, 00:13:42.465 "message": "Input/output error" 00:13:42.465 } 00:13:42.465 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83436 00:13:42.465 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83436 ']' 00:13:42.465 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83436 00:13:42.465 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:42.465 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:42.465 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83436 00:13:42.466 killing process with pid 83436 00:13:42.466 Received shutdown signal, test time was about 10.000000 seconds 00:13:42.466 00:13:42.466 Latency(us) 00:13:42.466 [2024-11-19T01:54:53.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.466 [2024-11-19T01:54:53.081Z] =================================================================================================================== 00:13:42.466 [2024-11-19T01:54:53.081Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:42.466 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:42.466 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:42.466 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83436' 00:13:42.466 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83436 00:13:42.466 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83436 00:13:42.726 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:42.726 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:42.726 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:42.726 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:42.726 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:42.726 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:42.726 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:42.726 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:42.726 01:54:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:42.726 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:42.726 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:42.726 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:42.726 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:42.726 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:42.726 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:42.726 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:42.726 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:42.726 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:42.726 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83456 00:13:42.726 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:42.726 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:42.726 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83456 /var/tmp/bdevperf.sock 00:13:42.726 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83456 ']' 00:13:42.726 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:42.726 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:42.726 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:42.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:42.726 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:42.726 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:42.726 [2024-11-19 01:54:53.173158] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:13:42.726 [2024-11-19 01:54:53.173410] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83456 ] 00:13:42.726 [2024-11-19 01:54:53.321414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.985 [2024-11-19 01:54:53.346582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:42.985 [2024-11-19 01:54:53.381102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:42.985 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:42.985 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:42.985 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:13:43.244 [2024-11-19 01:54:53.755617] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:13:43.244 [2024-11-19 01:54:53.755702] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:43.244 request: 00:13:43.244 { 00:13:43.244 "name": "key0", 00:13:43.244 "path": "", 00:13:43.244 "method": "keyring_file_add_key", 00:13:43.244 "req_id": 1 00:13:43.244 } 00:13:43.244 Got JSON-RPC error response 00:13:43.244 response: 00:13:43.244 { 00:13:43.244 "code": -1, 00:13:43.244 "message": "Operation not permitted" 00:13:43.244 } 00:13:43.244 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:43.503 [2024-11-19 01:54:54.087884] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:43.503 [2024-11-19 01:54:54.087968] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:13:43.503 request: 00:13:43.503 { 00:13:43.503 "name": "TLSTEST", 00:13:43.503 "trtype": "tcp", 00:13:43.503 "traddr": "10.0.0.3", 00:13:43.503 "adrfam": "ipv4", 00:13:43.503 "trsvcid": "4420", 00:13:43.503 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:43.503 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:43.503 "prchk_reftag": false, 00:13:43.503 "prchk_guard": false, 00:13:43.503 "hdgst": false, 00:13:43.503 "ddgst": false, 00:13:43.503 "psk": "key0", 00:13:43.503 "allow_unrecognized_csi": false, 00:13:43.503 "method": "bdev_nvme_attach_controller", 00:13:43.503 "req_id": 1 00:13:43.503 } 00:13:43.503 Got JSON-RPC error response 00:13:43.503 response: 00:13:43.503 { 00:13:43.503 "code": -126, 00:13:43.503 "message": "Required key not available" 00:13:43.503 } 00:13:43.503 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83456 00:13:43.503 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83456 ']' 00:13:43.503 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83456 00:13:43.503 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:43.503 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:43.503 01:54:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83456 00:13:43.763 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:43.763 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:43.763 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83456' 00:13:43.763 killing process with pid 83456 00:13:43.763 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83456 00:13:43.763 Received shutdown signal, test time was about 10.000000 seconds 00:13:43.763 00:13:43.763 Latency(us) 00:13:43.763 [2024-11-19T01:54:54.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.763 [2024-11-19T01:54:54.378Z] =================================================================================================================== 00:13:43.763 [2024-11-19T01:54:54.378Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:43.763 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83456 00:13:43.763 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:43.763 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:43.763 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:43.763 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:43.763 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:43.763 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 83031 00:13:43.763 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83031 ']' 00:13:43.763 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83031 00:13:43.763 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:43.763 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:43.763 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83031 00:13:43.763 killing process with pid 83031 00:13:43.763 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:43.763 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:43.763 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83031' 00:13:43.763 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83031 00:13:43.763 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83031 00:13:44.022 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:44.022 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:44.022 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:44.022 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:13:44.022 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:44.022 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:13:44.022 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:44.022 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:44.022 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:13:44.022 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.NIhhAywiM7 00:13:44.022 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:44.022 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.NIhhAywiM7 00:13:44.022 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:13:44.022 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:44.022 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:44.022 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:44.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.022 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83494 00:13:44.022 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83494 00:13:44.022 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:44.022 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83494 ']' 00:13:44.022 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.022 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:44.022 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.022 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:44.022 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:44.022 [2024-11-19 01:54:54.559256] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:13:44.022 [2024-11-19 01:54:54.559357] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.283 [2024-11-19 01:54:54.714349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.283 [2024-11-19 01:54:54.738273] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.283 [2024-11-19 01:54:54.738594] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:44.283 [2024-11-19 01:54:54.738621] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:44.283 [2024-11-19 01:54:54.738631] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:44.283 [2024-11-19 01:54:54.738642] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:44.283 [2024-11-19 01:54:54.739015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.283 [2024-11-19 01:54:54.774029] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:44.283 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:44.283 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:44.283 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:44.283 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:44.283 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:44.283 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.283 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.NIhhAywiM7 00:13:44.283 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.NIhhAywiM7 00:13:44.283 01:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:44.543 [2024-11-19 01:54:55.125242] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.543 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:45.110 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:45.370 [2024-11-19 01:54:55.773555] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:45.370 [2024-11-19 01:54:55.773829] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:45.370 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:45.644 malloc0 00:13:45.644 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:45.941 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.NIhhAywiM7 00:13:46.200 01:54:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:46.459 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NIhhAywiM7 00:13:46.459 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
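Earlier in this stretch of the trace, `format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2` built the TLS PSK interchange string through an inline `python -` whose heredoc body is not echoed by xtrace. The following is a minimal reconstruction of what that helper appears to compute, assuming the NVMe/TCP interchange layout of base64(key bytes plus little-endian CRC32); the prefix, key, and digest values are the ones captured above, and the output should match the `key_long` value the test then writes to /tmp/tmp.NIhhAywiM7:

```bash
# Reconstruction (the real heredoc is not visible in the trace): the interchange
# string is prefix:digest:base64(key || crc32(key)):, CRC appended little-endian.
prefix=NVMeTLSkey-1
key=00112233445566778899aabbccddeeff0011223344556677
digest=2
python3 - <<EOF
import base64, zlib
key = b"$key"
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("$prefix:{:02x}:{}:".format($digest, base64.b64encode(key + crc).decode()))
EOF
```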
00:13:46.459 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:46.459 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:46.459 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NIhhAywiM7 00:13:46.459 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:46.459 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83542 00:13:46.459 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:46.459 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:46.459 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83542 /var/tmp/bdevperf.sock 00:13:46.459 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83542 ']' 00:13:46.459 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:46.459 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:46.459 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:46.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:46.459 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:46.459 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:46.718 [2024-11-19 01:54:57.101195] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:13:46.718 [2024-11-19 01:54:57.101515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83542 ] 00:13:46.718 [2024-11-19 01:54:57.253213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.718 [2024-11-19 01:54:57.279334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:46.718 [2024-11-19 01:54:57.314112] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:46.978 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:46.978 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:46.978 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NIhhAywiM7 00:13:47.237 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:47.495 [2024-11-19 01:54:57.892603] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:47.495 TLSTESTn1 00:13:47.495 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:47.755 Running I/O for 10 seconds... 00:13:49.628 4160.00 IOPS, 16.25 MiB/s [2024-11-19T01:55:01.180Z] 4191.50 IOPS, 16.37 MiB/s [2024-11-19T01:55:02.558Z] 4192.33 IOPS, 16.38 MiB/s [2024-11-19T01:55:03.125Z] 4192.00 IOPS, 16.38 MiB/s [2024-11-19T01:55:04.498Z] 4213.00 IOPS, 16.46 MiB/s [2024-11-19T01:55:05.435Z] 4084.17 IOPS, 15.95 MiB/s [2024-11-19T01:55:06.372Z] 4110.57 IOPS, 16.06 MiB/s [2024-11-19T01:55:07.309Z] 4115.50 IOPS, 16.08 MiB/s [2024-11-19T01:55:08.247Z] 4132.89 IOPS, 16.14 MiB/s [2024-11-19T01:55:08.247Z] 4160.60 IOPS, 16.25 MiB/s 00:13:57.632 Latency(us) 00:13:57.632 [2024-11-19T01:55:08.247Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.632 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:57.632 Verification LBA range: start 0x0 length 0x2000 00:13:57.632 TLSTESTn1 : 10.01 4167.31 16.28 0.00 0.00 30663.66 4825.83 27882.59 00:13:57.632 [2024-11-19T01:55:08.247Z] =================================================================================================================== 00:13:57.632 [2024-11-19T01:55:08.247Z] Total : 4167.31 16.28 0.00 0.00 30663.66 4825.83 27882.59 00:13:57.632 { 00:13:57.632 "results": [ 00:13:57.632 { 00:13:57.632 "job": "TLSTESTn1", 00:13:57.632 "core_mask": "0x4", 00:13:57.632 "workload": "verify", 00:13:57.632 "status": "finished", 00:13:57.632 "verify_range": { 00:13:57.632 "start": 0, 00:13:57.632 "length": 8192 00:13:57.632 }, 00:13:57.632 "queue_depth": 128, 00:13:57.632 "io_size": 4096, 00:13:57.632 "runtime": 10.014124, 00:13:57.632 "iops": 4167.314085585519, 00:13:57.632 "mibps": 16.278570646818434, 00:13:57.632 "io_failed": 0, 00:13:57.632 "io_timeout": 0, 00:13:57.632 "avg_latency_us": 30663.657916227356, 00:13:57.632 "min_latency_us": 4825.832727272727, 00:13:57.632 
"max_latency_us": 27882.589090909092 00:13:57.632 } 00:13:57.632 ], 00:13:57.632 "core_count": 1 00:13:57.632 } 00:13:57.632 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:57.632 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83542 00:13:57.632 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83542 ']' 00:13:57.632 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83542 00:13:57.632 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:57.632 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:57.632 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83542 00:13:57.632 killing process with pid 83542 00:13:57.632 Received shutdown signal, test time was about 10.000000 seconds 00:13:57.632 00:13:57.632 Latency(us) 00:13:57.632 [2024-11-19T01:55:08.247Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.632 [2024-11-19T01:55:08.247Z] =================================================================================================================== 00:13:57.632 [2024-11-19T01:55:08.247Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:57.632 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:57.632 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:57.632 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83542' 00:13:57.632 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83542 00:13:57.632 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83542 00:13:57.891 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.NIhhAywiM7 00:13:57.891 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NIhhAywiM7 00:13:57.891 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:57.891 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NIhhAywiM7 00:13:57.891 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:13:57.891 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:57.891 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:13:57.891 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:57.891 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NIhhAywiM7 00:13:57.891 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:57.891 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:57.891 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:57.891 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NIhhAywiM7 00:13:57.891 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:57.891 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83670 00:13:57.891 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:57.891 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:57.891 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83670 /var/tmp/bdevperf.sock 00:13:57.891 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83670 ']' 00:13:57.891 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:57.891 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:57.891 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:57.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:57.891 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:57.891 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:57.891 [2024-11-19 01:55:08.372709] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:13:57.891 [2024-11-19 01:55:08.373184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83670 ] 00:13:58.151 [2024-11-19 01:55:08.521263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.151 [2024-11-19 01:55:08.543545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:58.151 [2024-11-19 01:55:08.576057] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:58.151 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:58.151 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:58.151 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NIhhAywiM7 00:13:58.410 [2024-11-19 01:55:08.925589] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.NIhhAywiM7': 0100666 00:13:58.410 [2024-11-19 01:55:08.925836] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:58.410 request: 00:13:58.410 { 00:13:58.410 "name": "key0", 00:13:58.410 "path": "/tmp/tmp.NIhhAywiM7", 00:13:58.410 "method": "keyring_file_add_key", 00:13:58.410 "req_id": 1 00:13:58.410 } 00:13:58.410 Got JSON-RPC error response 00:13:58.410 response: 00:13:58.410 { 00:13:58.410 "code": -1, 00:13:58.410 "message": "Operation not permitted" 00:13:58.410 } 00:13:58.410 01:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:58.680 [2024-11-19 01:55:09.221751] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:58.680 [2024-11-19 01:55:09.222209] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:13:58.680 request: 00:13:58.680 { 00:13:58.680 "name": "TLSTEST", 00:13:58.680 "trtype": "tcp", 00:13:58.680 "traddr": "10.0.0.3", 00:13:58.680 "adrfam": "ipv4", 00:13:58.680 "trsvcid": "4420", 00:13:58.680 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:58.680 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:58.680 "prchk_reftag": false, 00:13:58.680 "prchk_guard": false, 00:13:58.680 "hdgst": false, 00:13:58.680 "ddgst": false, 00:13:58.680 "psk": "key0", 00:13:58.680 "allow_unrecognized_csi": false, 00:13:58.680 "method": "bdev_nvme_attach_controller", 00:13:58.680 "req_id": 1 00:13:58.680 } 00:13:58.680 Got JSON-RPC error response 00:13:58.680 response: 00:13:58.680 { 00:13:58.680 "code": -126, 00:13:58.680 "message": "Required key not available" 00:13:58.680 } 00:13:58.680 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83670 00:13:58.680 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83670 ']' 00:13:58.680 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83670 00:13:58.680 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:58.680 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:58.680 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83670 00:13:58.681 killing process with pid 83670 00:13:58.681 Received shutdown signal, test time was about 10.000000 seconds 00:13:58.681 00:13:58.681 Latency(us) 00:13:58.681 [2024-11-19T01:55:09.296Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.681 [2024-11-19T01:55:09.296Z] =================================================================================================================== 00:13:58.681 [2024-11-19T01:55:09.296Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:58.681 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:13:58.681 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:13:58.681 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83670' 00:13:58.681 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83670 00:13:58.681 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83670 00:13:58.986 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:58.986 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:13:58.986 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:58.986 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:58.986 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:58.986 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 83494 00:13:58.986 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83494 ']' 00:13:58.986 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83494 00:13:58.986 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:13:58.986 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:58.986 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83494 00:13:58.986 killing process with pid 83494 00:13:58.986 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:58.986 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:58.986 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83494' 00:13:58.986 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83494 00:13:58.986 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83494 00:13:58.986 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:13:58.986 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:58.986 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:58.986 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:13:58.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.986 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83696 00:13:58.986 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:58.986 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83696 00:13:58.986 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83696 ']' 00:13:58.986 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.986 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:58.986 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.986 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:58.986 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:59.246 [2024-11-19 01:55:09.624208] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:13:59.246 [2024-11-19 01:55:09.624467] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.246 [2024-11-19 01:55:09.766079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.246 [2024-11-19 01:55:09.785296] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:59.246 [2024-11-19 01:55:09.785612] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:59.246 [2024-11-19 01:55:09.785757] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:59.246 [2024-11-19 01:55:09.785906] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:59.246 [2024-11-19 01:55:09.785942] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
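The `chmod 0666` / `NOT run_bdevperf` sequence traced above is a deliberate failure path: once the key file is readable by group or other, the keyring refuses to load it (`Invalid permissions for key file ... 0100666`), and the subsequent TLS attach fails because no key was ever registered. The same two RPCs, lifted from the trace with the rpc.py path shortened, reproduce it standalone; both are expected to error exactly as logged:

```bash
# Both calls below are *expected* to fail while the key file is world-readable;
# SPDK's file-based keyring rejects key files accessible to group or other.
chmod 0666 /tmp/tmp.NIhhAywiM7

# JSON-RPC error -1 "Operation not permitted" (keyring_file_add_key)
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NIhhAywiM7

# JSON-RPC error -126 "Required key not available" (the PSK never loaded)
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
  -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
```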
00:13:59.246 [2024-11-19 01:55:09.786317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.246 [2024-11-19 01:55:09.815223] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:59.246 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:59.246 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:59.246 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:59.246 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:59.246 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:59.505 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:59.505 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.NIhhAywiM7 00:13:59.505 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:13:59.505 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.NIhhAywiM7 00:13:59.505 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:13:59.505 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:59.505 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:13:59.505 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:59.505 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.NIhhAywiM7 00:13:59.505 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.NIhhAywiM7 00:13:59.505 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:59.763 [2024-11-19 01:55:10.134959] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:59.764 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:00.022 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:00.022 [2024-11-19 01:55:10.611061] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:00.022 [2024-11-19 01:55:10.611577] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:00.022 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:00.590 malloc0 00:14:00.590 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:00.590 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.NIhhAywiM7 00:14:00.850 
[2024-11-19 01:55:11.393748] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.NIhhAywiM7': 0100666 00:14:00.850 [2024-11-19 01:55:11.393993] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:00.850 request: 00:14:00.850 { 00:14:00.850 "name": "key0", 00:14:00.850 "path": "/tmp/tmp.NIhhAywiM7", 00:14:00.850 "method": "keyring_file_add_key", 00:14:00.850 "req_id": 1 00:14:00.850 } 00:14:00.850 Got JSON-RPC error response 00:14:00.850 response: 00:14:00.850 { 00:14:00.850 "code": -1, 00:14:00.850 "message": "Operation not permitted" 00:14:00.850 } 00:14:00.850 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:01.110 [2024-11-19 01:55:11.633887] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:14:01.110 [2024-11-19 01:55:11.634166] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:01.110 request: 00:14:01.110 { 00:14:01.110 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:01.110 "host": "nqn.2016-06.io.spdk:host1", 00:14:01.110 "psk": "key0", 00:14:01.110 "method": "nvmf_subsystem_add_host", 00:14:01.110 "req_id": 1 00:14:01.110 } 00:14:01.110 Got JSON-RPC error response 00:14:01.110 response: 00:14:01.110 { 00:14:01.110 "code": -32603, 00:14:01.110 "message": "Internal error" 00:14:01.110 } 00:14:01.110 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:01.110 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:01.110 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:01.110 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:01.110 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 83696 00:14:01.110 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83696 ']' 00:14:01.110 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83696 00:14:01.110 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:01.110 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:01.110 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83696 00:14:01.110 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:01.110 killing process with pid 83696 00:14:01.110 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:01.110 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83696' 00:14:01.110 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83696 00:14:01.110 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83696 00:14:01.370 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.NIhhAywiM7 00:14:01.370 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:14:01.370 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:01.370 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:01.370 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:01.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.370 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83758 00:14:01.370 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:01.370 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83758 00:14:01.370 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83758 ']' 00:14:01.370 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.370 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:01.370 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.370 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:01.370 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:01.370 [2024-11-19 01:55:11.886402] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:14:01.370 [2024-11-19 01:55:11.886778] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.630 [2024-11-19 01:55:12.035676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.630 [2024-11-19 01:55:12.054886] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.630 [2024-11-19 01:55:12.055131] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:01.630 [2024-11-19 01:55:12.055270] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:01.630 [2024-11-19 01:55:12.055483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:01.630 [2024-11-19 01:55:12.055641] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
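tls.sh@182 above restores owner-only permissions before the target is brought back up. Since the keyring check keys off the file mode bits, a quick pre-flight verification (hypothetical helper logic, not part of the suite; assumes GNU stat) is to confirm nothing is granted to group or other:

```bash
chmod 0600 /tmp/tmp.NIhhAywiM7
mode=$(stat -c '%a' /tmp/tmp.NIhhAywiM7)
# A key file passes the keyring's permission check only if the group/other bits are clear.
if (( (8#$mode & 8#077) == 0 )); then echo "ok: $mode"; else echo "too permissive: $mode"; fi
```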
00:14:01.630 [2024-11-19 01:55:12.055969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.630 [2024-11-19 01:55:12.083814] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:01.630 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:01.630 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:01.630 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:01.630 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:01.630 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:01.630 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:01.630 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.NIhhAywiM7 00:14:01.630 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.NIhhAywiM7 00:14:01.630 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:01.888 [2024-11-19 01:55:12.428143] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.888 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:02.147 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:02.406 [2024-11-19 01:55:12.964222] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:02.406 [2024-11-19 01:55:12.964613] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:02.406 01:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:02.665 malloc0 00:14:02.665 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:02.923 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.NIhhAywiM7 00:14:03.182 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:03.441 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:03.441 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=83807 00:14:03.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
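At this point `setup_nvmf_tgt` (tls.sh@50-59) has succeeded against the fresh target: with the key file back at 0600, `keyring_file_add_key` and `nvmf_subsystem_add_host` both complete, and bdevperf is launched. The helper reduces to this RPC sequence, copied from the trace with the rpc.py path shortened; the `-k` on the listener is what enables the experimental TLS support the notices mention:

```bash
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.NIhhAywiM7
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
```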
00:14:03.441 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:03.441 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 83807 /var/tmp/bdevperf.sock 00:14:03.441 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83807 ']' 00:14:03.441 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:03.441 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:03.441 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:03.441 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:03.441 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:03.441 [2024-11-19 01:55:14.038454] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:14:03.441 [2024-11-19 01:55:14.038757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83807 ] 00:14:03.699 [2024-11-19 01:55:14.186349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.699 [2024-11-19 01:55:14.211319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.699 [2024-11-19 01:55:14.246441] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:03.958 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:03.958 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:03.958 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NIhhAywiM7 00:14:03.958 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:04.217 [2024-11-19 01:55:14.793496] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:04.475 TLSTESTn1 00:14:04.475 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:04.734 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:14:04.734 "subsystems": [ 00:14:04.734 { 00:14:04.734 "subsystem": "keyring", 00:14:04.734 "config": [ 00:14:04.734 { 00:14:04.734 "method": "keyring_file_add_key", 00:14:04.734 "params": { 00:14:04.734 "name": "key0", 00:14:04.734 "path": "/tmp/tmp.NIhhAywiM7" 00:14:04.734 } 00:14:04.734 } 00:14:04.734 ] 00:14:04.734 }, 00:14:04.734 { 00:14:04.734 "subsystem": "iobuf", 00:14:04.734 "config": [ 00:14:04.734 { 00:14:04.734 "method": "iobuf_set_options", 00:14:04.734 "params": { 00:14:04.734 "small_pool_count": 8192, 00:14:04.734 "large_pool_count": 1024, 00:14:04.734 "small_bufsize": 8192, 00:14:04.734 "large_bufsize": 135168, 
00:14:04.734 "enable_numa": false 00:14:04.734 } 00:14:04.734 } 00:14:04.734 ] 00:14:04.734 }, 00:14:04.734 { 00:14:04.734 "subsystem": "sock", 00:14:04.734 "config": [ 00:14:04.734 { 00:14:04.734 "method": "sock_set_default_impl", 00:14:04.734 "params": { 00:14:04.734 "impl_name": "uring" 00:14:04.734 } 00:14:04.734 }, 00:14:04.734 { 00:14:04.734 "method": "sock_impl_set_options", 00:14:04.734 "params": { 00:14:04.734 "impl_name": "ssl", 00:14:04.734 "recv_buf_size": 4096, 00:14:04.734 "send_buf_size": 4096, 00:14:04.734 "enable_recv_pipe": true, 00:14:04.734 "enable_quickack": false, 00:14:04.734 "enable_placement_id": 0, 00:14:04.734 "enable_zerocopy_send_server": true, 00:14:04.734 "enable_zerocopy_send_client": false, 00:14:04.734 "zerocopy_threshold": 0, 00:14:04.734 "tls_version": 0, 00:14:04.734 "enable_ktls": false 00:14:04.734 } 00:14:04.734 }, 00:14:04.734 { 00:14:04.734 "method": "sock_impl_set_options", 00:14:04.734 "params": { 00:14:04.734 "impl_name": "posix", 00:14:04.734 "recv_buf_size": 2097152, 00:14:04.734 "send_buf_size": 2097152, 00:14:04.734 "enable_recv_pipe": true, 00:14:04.734 "enable_quickack": false, 00:14:04.734 "enable_placement_id": 0, 00:14:04.734 "enable_zerocopy_send_server": true, 00:14:04.734 "enable_zerocopy_send_client": false, 00:14:04.734 "zerocopy_threshold": 0, 00:14:04.734 "tls_version": 0, 00:14:04.734 "enable_ktls": false 00:14:04.734 } 00:14:04.734 }, 00:14:04.734 { 00:14:04.734 "method": "sock_impl_set_options", 00:14:04.734 "params": { 00:14:04.734 "impl_name": "uring", 00:14:04.734 "recv_buf_size": 2097152, 00:14:04.734 "send_buf_size": 2097152, 00:14:04.734 "enable_recv_pipe": true, 00:14:04.734 "enable_quickack": false, 00:14:04.734 "enable_placement_id": 0, 00:14:04.734 "enable_zerocopy_send_server": false, 00:14:04.734 "enable_zerocopy_send_client": false, 00:14:04.734 "zerocopy_threshold": 0, 00:14:04.734 "tls_version": 0, 00:14:04.734 "enable_ktls": false 00:14:04.734 } 00:14:04.734 } 00:14:04.734 ] 00:14:04.734 }, 00:14:04.734 { 00:14:04.734 "subsystem": "vmd", 00:14:04.734 "config": [] 00:14:04.734 }, 00:14:04.734 { 00:14:04.734 "subsystem": "accel", 00:14:04.734 "config": [ 00:14:04.734 { 00:14:04.734 "method": "accel_set_options", 00:14:04.734 "params": { 00:14:04.734 "small_cache_size": 128, 00:14:04.734 "large_cache_size": 16, 00:14:04.734 "task_count": 2048, 00:14:04.734 "sequence_count": 2048, 00:14:04.734 "buf_count": 2048 00:14:04.734 } 00:14:04.734 } 00:14:04.734 ] 00:14:04.734 }, 00:14:04.734 { 00:14:04.734 "subsystem": "bdev", 00:14:04.734 "config": [ 00:14:04.734 { 00:14:04.734 "method": "bdev_set_options", 00:14:04.734 "params": { 00:14:04.734 "bdev_io_pool_size": 65535, 00:14:04.734 "bdev_io_cache_size": 256, 00:14:04.735 "bdev_auto_examine": true, 00:14:04.735 "iobuf_small_cache_size": 128, 00:14:04.735 "iobuf_large_cache_size": 16 00:14:04.735 } 00:14:04.735 }, 00:14:04.735 { 00:14:04.735 "method": "bdev_raid_set_options", 00:14:04.735 "params": { 00:14:04.735 "process_window_size_kb": 1024, 00:14:04.735 "process_max_bandwidth_mb_sec": 0 00:14:04.735 } 00:14:04.735 }, 00:14:04.735 { 00:14:04.735 "method": "bdev_iscsi_set_options", 00:14:04.735 "params": { 00:14:04.735 "timeout_sec": 30 00:14:04.735 } 00:14:04.735 }, 00:14:04.735 { 00:14:04.735 "method": "bdev_nvme_set_options", 00:14:04.735 "params": { 00:14:04.735 "action_on_timeout": "none", 00:14:04.735 "timeout_us": 0, 00:14:04.735 "timeout_admin_us": 0, 00:14:04.735 "keep_alive_timeout_ms": 10000, 00:14:04.735 "arbitration_burst": 0, 00:14:04.735 
"low_priority_weight": 0, 00:14:04.735 "medium_priority_weight": 0, 00:14:04.735 "high_priority_weight": 0, 00:14:04.735 "nvme_adminq_poll_period_us": 10000, 00:14:04.735 "nvme_ioq_poll_period_us": 0, 00:14:04.735 "io_queue_requests": 0, 00:14:04.735 "delay_cmd_submit": true, 00:14:04.735 "transport_retry_count": 4, 00:14:04.735 "bdev_retry_count": 3, 00:14:04.735 "transport_ack_timeout": 0, 00:14:04.735 "ctrlr_loss_timeout_sec": 0, 00:14:04.735 "reconnect_delay_sec": 0, 00:14:04.735 "fast_io_fail_timeout_sec": 0, 00:14:04.735 "disable_auto_failback": false, 00:14:04.735 "generate_uuids": false, 00:14:04.735 "transport_tos": 0, 00:14:04.735 "nvme_error_stat": false, 00:14:04.735 "rdma_srq_size": 0, 00:14:04.735 "io_path_stat": false, 00:14:04.735 "allow_accel_sequence": false, 00:14:04.735 "rdma_max_cq_size": 0, 00:14:04.735 "rdma_cm_event_timeout_ms": 0, 00:14:04.735 "dhchap_digests": [ 00:14:04.735 "sha256", 00:14:04.735 "sha384", 00:14:04.735 "sha512" 00:14:04.735 ], 00:14:04.735 "dhchap_dhgroups": [ 00:14:04.735 "null", 00:14:04.735 "ffdhe2048", 00:14:04.735 "ffdhe3072", 00:14:04.735 "ffdhe4096", 00:14:04.735 "ffdhe6144", 00:14:04.735 "ffdhe8192" 00:14:04.735 ] 00:14:04.735 } 00:14:04.735 }, 00:14:04.735 { 00:14:04.735 "method": "bdev_nvme_set_hotplug", 00:14:04.735 "params": { 00:14:04.735 "period_us": 100000, 00:14:04.735 "enable": false 00:14:04.735 } 00:14:04.735 }, 00:14:04.735 { 00:14:04.735 "method": "bdev_malloc_create", 00:14:04.735 "params": { 00:14:04.735 "name": "malloc0", 00:14:04.735 "num_blocks": 8192, 00:14:04.735 "block_size": 4096, 00:14:04.735 "physical_block_size": 4096, 00:14:04.735 "uuid": "231455bf-b62e-4a95-a37c-c93028dbf959", 00:14:04.735 "optimal_io_boundary": 0, 00:14:04.735 "md_size": 0, 00:14:04.735 "dif_type": 0, 00:14:04.735 "dif_is_head_of_md": false, 00:14:04.735 "dif_pi_format": 0 00:14:04.735 } 00:14:04.735 }, 00:14:04.735 { 00:14:04.735 "method": "bdev_wait_for_examine" 00:14:04.735 } 00:14:04.735 ] 00:14:04.735 }, 00:14:04.735 { 00:14:04.735 "subsystem": "nbd", 00:14:04.735 "config": [] 00:14:04.735 }, 00:14:04.735 { 00:14:04.735 "subsystem": "scheduler", 00:14:04.735 "config": [ 00:14:04.735 { 00:14:04.735 "method": "framework_set_scheduler", 00:14:04.735 "params": { 00:14:04.735 "name": "static" 00:14:04.735 } 00:14:04.735 } 00:14:04.735 ] 00:14:04.735 }, 00:14:04.735 { 00:14:04.735 "subsystem": "nvmf", 00:14:04.735 "config": [ 00:14:04.735 { 00:14:04.735 "method": "nvmf_set_config", 00:14:04.735 "params": { 00:14:04.735 "discovery_filter": "match_any", 00:14:04.735 "admin_cmd_passthru": { 00:14:04.735 "identify_ctrlr": false 00:14:04.735 }, 00:14:04.735 "dhchap_digests": [ 00:14:04.735 "sha256", 00:14:04.735 "sha384", 00:14:04.735 "sha512" 00:14:04.735 ], 00:14:04.735 "dhchap_dhgroups": [ 00:14:04.735 "null", 00:14:04.735 "ffdhe2048", 00:14:04.735 "ffdhe3072", 00:14:04.735 "ffdhe4096", 00:14:04.735 "ffdhe6144", 00:14:04.735 "ffdhe8192" 00:14:04.735 ] 00:14:04.735 } 00:14:04.735 }, 00:14:04.735 { 00:14:04.735 "method": "nvmf_set_max_subsystems", 00:14:04.735 "params": { 00:14:04.735 "max_subsystems": 1024 00:14:04.735 } 00:14:04.735 }, 00:14:04.735 { 00:14:04.735 "method": "nvmf_set_crdt", 00:14:04.735 "params": { 00:14:04.735 "crdt1": 0, 00:14:04.735 "crdt2": 0, 00:14:04.735 "crdt3": 0 00:14:04.735 } 00:14:04.735 }, 00:14:04.735 { 00:14:04.735 "method": "nvmf_create_transport", 00:14:04.735 "params": { 00:14:04.735 "trtype": "TCP", 00:14:04.735 "max_queue_depth": 128, 00:14:04.735 "max_io_qpairs_per_ctrlr": 127, 00:14:04.735 
"in_capsule_data_size": 4096, 00:14:04.735 "max_io_size": 131072, 00:14:04.735 "io_unit_size": 131072, 00:14:04.735 "max_aq_depth": 128, 00:14:04.735 "num_shared_buffers": 511, 00:14:04.735 "buf_cache_size": 4294967295, 00:14:04.735 "dif_insert_or_strip": false, 00:14:04.735 "zcopy": false, 00:14:04.735 "c2h_success": false, 00:14:04.735 "sock_priority": 0, 00:14:04.735 "abort_timeout_sec": 1, 00:14:04.735 "ack_timeout": 0, 00:14:04.735 "data_wr_pool_size": 0 00:14:04.735 } 00:14:04.735 }, 00:14:04.735 { 00:14:04.735 "method": "nvmf_create_subsystem", 00:14:04.735 "params": { 00:14:04.735 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:04.735 "allow_any_host": false, 00:14:04.735 "serial_number": "SPDK00000000000001", 00:14:04.735 "model_number": "SPDK bdev Controller", 00:14:04.735 "max_namespaces": 10, 00:14:04.735 "min_cntlid": 1, 00:14:04.735 "max_cntlid": 65519, 00:14:04.735 "ana_reporting": false 00:14:04.735 } 00:14:04.735 }, 00:14:04.735 { 00:14:04.735 "method": "nvmf_subsystem_add_host", 00:14:04.735 "params": { 00:14:04.735 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:04.735 "host": "nqn.2016-06.io.spdk:host1", 00:14:04.735 "psk": "key0" 00:14:04.735 } 00:14:04.735 }, 00:14:04.735 { 00:14:04.735 "method": "nvmf_subsystem_add_ns", 00:14:04.735 "params": { 00:14:04.735 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:04.735 "namespace": { 00:14:04.735 "nsid": 1, 00:14:04.735 "bdev_name": "malloc0", 00:14:04.735 "nguid": "231455BFB62E4A95A37CC93028DBF959", 00:14:04.735 "uuid": "231455bf-b62e-4a95-a37c-c93028dbf959", 00:14:04.735 "no_auto_visible": false 00:14:04.735 } 00:14:04.735 } 00:14:04.735 }, 00:14:04.735 { 00:14:04.735 "method": "nvmf_subsystem_add_listener", 00:14:04.735 "params": { 00:14:04.735 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:04.735 "listen_address": { 00:14:04.735 "trtype": "TCP", 00:14:04.735 "adrfam": "IPv4", 00:14:04.735 "traddr": "10.0.0.3", 00:14:04.735 "trsvcid": "4420" 00:14:04.735 }, 00:14:04.735 "secure_channel": true 00:14:04.735 } 00:14:04.735 } 00:14:04.735 ] 00:14:04.735 } 00:14:04.735 ] 00:14:04.735 }' 00:14:04.735 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:04.995 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:14:04.995 "subsystems": [ 00:14:04.995 { 00:14:04.995 "subsystem": "keyring", 00:14:04.995 "config": [ 00:14:04.995 { 00:14:04.995 "method": "keyring_file_add_key", 00:14:04.995 "params": { 00:14:04.995 "name": "key0", 00:14:04.995 "path": "/tmp/tmp.NIhhAywiM7" 00:14:04.995 } 00:14:04.995 } 00:14:04.995 ] 00:14:04.995 }, 00:14:04.995 { 00:14:04.995 "subsystem": "iobuf", 00:14:04.995 "config": [ 00:14:04.995 { 00:14:04.995 "method": "iobuf_set_options", 00:14:04.995 "params": { 00:14:04.995 "small_pool_count": 8192, 00:14:04.995 "large_pool_count": 1024, 00:14:04.995 "small_bufsize": 8192, 00:14:04.995 "large_bufsize": 135168, 00:14:04.995 "enable_numa": false 00:14:04.995 } 00:14:04.995 } 00:14:04.995 ] 00:14:04.995 }, 00:14:04.995 { 00:14:04.995 "subsystem": "sock", 00:14:04.995 "config": [ 00:14:04.995 { 00:14:04.995 "method": "sock_set_default_impl", 00:14:04.995 "params": { 00:14:04.995 "impl_name": "uring" 00:14:04.995 } 00:14:04.995 }, 00:14:04.995 { 00:14:04.995 "method": "sock_impl_set_options", 00:14:04.995 "params": { 00:14:04.995 "impl_name": "ssl", 00:14:04.995 "recv_buf_size": 4096, 00:14:04.995 "send_buf_size": 4096, 00:14:04.995 "enable_recv_pipe": true, 00:14:04.995 
"enable_quickack": false, 00:14:04.995 "enable_placement_id": 0, 00:14:04.995 "enable_zerocopy_send_server": true, 00:14:04.995 "enable_zerocopy_send_client": false, 00:14:04.995 "zerocopy_threshold": 0, 00:14:04.995 "tls_version": 0, 00:14:04.995 "enable_ktls": false 00:14:04.995 } 00:14:04.995 }, 00:14:04.995 { 00:14:04.995 "method": "sock_impl_set_options", 00:14:04.995 "params": { 00:14:04.995 "impl_name": "posix", 00:14:04.995 "recv_buf_size": 2097152, 00:14:04.995 "send_buf_size": 2097152, 00:14:04.995 "enable_recv_pipe": true, 00:14:04.995 "enable_quickack": false, 00:14:04.995 "enable_placement_id": 0, 00:14:04.995 "enable_zerocopy_send_server": true, 00:14:04.995 "enable_zerocopy_send_client": false, 00:14:04.995 "zerocopy_threshold": 0, 00:14:04.995 "tls_version": 0, 00:14:04.995 "enable_ktls": false 00:14:04.995 } 00:14:04.995 }, 00:14:04.995 { 00:14:04.995 "method": "sock_impl_set_options", 00:14:04.995 "params": { 00:14:04.995 "impl_name": "uring", 00:14:04.995 "recv_buf_size": 2097152, 00:14:04.995 "send_buf_size": 2097152, 00:14:04.995 "enable_recv_pipe": true, 00:14:04.995 "enable_quickack": false, 00:14:04.995 "enable_placement_id": 0, 00:14:04.995 "enable_zerocopy_send_server": false, 00:14:04.995 "enable_zerocopy_send_client": false, 00:14:04.995 "zerocopy_threshold": 0, 00:14:04.995 "tls_version": 0, 00:14:04.995 "enable_ktls": false 00:14:04.995 } 00:14:04.995 } 00:14:04.995 ] 00:14:04.995 }, 00:14:04.995 { 00:14:04.995 "subsystem": "vmd", 00:14:04.995 "config": [] 00:14:04.995 }, 00:14:04.995 { 00:14:04.995 "subsystem": "accel", 00:14:04.995 "config": [ 00:14:04.995 { 00:14:04.995 "method": "accel_set_options", 00:14:04.995 "params": { 00:14:04.995 "small_cache_size": 128, 00:14:04.995 "large_cache_size": 16, 00:14:04.995 "task_count": 2048, 00:14:04.995 "sequence_count": 2048, 00:14:04.995 "buf_count": 2048 00:14:04.995 } 00:14:04.995 } 00:14:04.995 ] 00:14:04.995 }, 00:14:04.995 { 00:14:04.995 "subsystem": "bdev", 00:14:04.995 "config": [ 00:14:04.995 { 00:14:04.995 "method": "bdev_set_options", 00:14:04.995 "params": { 00:14:04.995 "bdev_io_pool_size": 65535, 00:14:04.995 "bdev_io_cache_size": 256, 00:14:04.995 "bdev_auto_examine": true, 00:14:04.995 "iobuf_small_cache_size": 128, 00:14:04.995 "iobuf_large_cache_size": 16 00:14:04.995 } 00:14:04.995 }, 00:14:04.995 { 00:14:04.995 "method": "bdev_raid_set_options", 00:14:04.995 "params": { 00:14:04.995 "process_window_size_kb": 1024, 00:14:04.995 "process_max_bandwidth_mb_sec": 0 00:14:04.995 } 00:14:04.995 }, 00:14:04.995 { 00:14:04.995 "method": "bdev_iscsi_set_options", 00:14:04.995 "params": { 00:14:04.995 "timeout_sec": 30 00:14:04.995 } 00:14:04.995 }, 00:14:04.996 { 00:14:04.996 "method": "bdev_nvme_set_options", 00:14:04.996 "params": { 00:14:04.996 "action_on_timeout": "none", 00:14:04.996 "timeout_us": 0, 00:14:04.996 "timeout_admin_us": 0, 00:14:04.996 "keep_alive_timeout_ms": 10000, 00:14:04.996 "arbitration_burst": 0, 00:14:04.996 "low_priority_weight": 0, 00:14:04.996 "medium_priority_weight": 0, 00:14:04.996 "high_priority_weight": 0, 00:14:04.996 "nvme_adminq_poll_period_us": 10000, 00:14:04.996 "nvme_ioq_poll_period_us": 0, 00:14:04.996 "io_queue_requests": 512, 00:14:04.996 "delay_cmd_submit": true, 00:14:04.996 "transport_retry_count": 4, 00:14:04.996 "bdev_retry_count": 3, 00:14:04.996 "transport_ack_timeout": 0, 00:14:04.996 "ctrlr_loss_timeout_sec": 0, 00:14:04.996 "reconnect_delay_sec": 0, 00:14:04.996 "fast_io_fail_timeout_sec": 0, 00:14:04.996 "disable_auto_failback": false, 00:14:04.996 
"generate_uuids": false, 00:14:04.996 "transport_tos": 0, 00:14:04.996 "nvme_error_stat": false, 00:14:04.996 "rdma_srq_size": 0, 00:14:04.996 "io_path_stat": false, 00:14:04.996 "allow_accel_sequence": false, 00:14:04.996 "rdma_max_cq_size": 0, 00:14:04.996 "rdma_cm_event_timeout_ms": 0, 00:14:04.996 "dhchap_digests": [ 00:14:04.996 "sha256", 00:14:04.996 "sha384", 00:14:04.996 "sha512" 00:14:04.996 ], 00:14:04.996 "dhchap_dhgroups": [ 00:14:04.996 "null", 00:14:04.996 "ffdhe2048", 00:14:04.996 "ffdhe3072", 00:14:04.996 "ffdhe4096", 00:14:04.996 "ffdhe6144", 00:14:04.996 "ffdhe8192" 00:14:04.996 ] 00:14:04.996 } 00:14:04.996 }, 00:14:04.996 { 00:14:04.996 "method": "bdev_nvme_attach_controller", 00:14:04.996 "params": { 00:14:04.996 "name": "TLSTEST", 00:14:04.996 "trtype": "TCP", 00:14:04.996 "adrfam": "IPv4", 00:14:04.996 "traddr": "10.0.0.3", 00:14:04.996 "trsvcid": "4420", 00:14:04.996 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:04.996 "prchk_reftag": false, 00:14:04.996 "prchk_guard": false, 00:14:04.996 "ctrlr_loss_timeout_sec": 0, 00:14:04.996 "reconnect_delay_sec": 0, 00:14:04.996 "fast_io_fail_timeout_sec": 0, 00:14:04.996 "psk": "key0", 00:14:04.996 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:04.996 "hdgst": false, 00:14:04.996 "ddgst": false, 00:14:04.996 "multipath": "multipath" 00:14:04.996 } 00:14:04.996 }, 00:14:04.996 { 00:14:04.996 "method": "bdev_nvme_set_hotplug", 00:14:04.996 "params": { 00:14:04.996 "period_us": 100000, 00:14:04.996 "enable": false 00:14:04.996 } 00:14:04.996 }, 00:14:04.996 { 00:14:04.996 "method": "bdev_wait_for_examine" 00:14:04.996 } 00:14:04.996 ] 00:14:04.996 }, 00:14:04.996 { 00:14:04.996 "subsystem": "nbd", 00:14:04.996 "config": [] 00:14:04.996 } 00:14:04.996 ] 00:14:04.996 }' 00:14:04.996 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 83807 00:14:04.996 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83807 ']' 00:14:04.996 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83807 00:14:04.996 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:04.996 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:04.996 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83807 00:14:05.255 killing process with pid 83807 00:14:05.255 Received shutdown signal, test time was about 10.000000 seconds 00:14:05.255 00:14:05.255 Latency(us) 00:14:05.255 [2024-11-19T01:55:15.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:05.255 [2024-11-19T01:55:15.870Z] =================================================================================================================== 00:14:05.255 [2024-11-19T01:55:15.870Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:05.255 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:05.255 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:05.255 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83807' 00:14:05.255 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83807 00:14:05.255 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83807 
00:14:05.255 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 83758 00:14:05.255 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83758 ']' 00:14:05.255 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83758 00:14:05.255 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:05.255 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:05.255 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83758 00:14:05.255 killing process with pid 83758 00:14:05.255 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:05.255 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:05.255 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83758' 00:14:05.255 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83758 00:14:05.255 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83758 00:14:05.514 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:05.514 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:05.514 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:05.514 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:05.514 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:14:05.514 "subsystems": [ 00:14:05.514 { 00:14:05.514 "subsystem": "keyring", 00:14:05.514 "config": [ 00:14:05.514 { 00:14:05.514 "method": "keyring_file_add_key", 00:14:05.514 "params": { 00:14:05.514 "name": "key0", 00:14:05.514 "path": "/tmp/tmp.NIhhAywiM7" 00:14:05.514 } 00:14:05.514 } 00:14:05.514 ] 00:14:05.514 }, 00:14:05.514 { 00:14:05.514 "subsystem": "iobuf", 00:14:05.514 "config": [ 00:14:05.514 { 00:14:05.514 "method": "iobuf_set_options", 00:14:05.514 "params": { 00:14:05.514 "small_pool_count": 8192, 00:14:05.514 "large_pool_count": 1024, 00:14:05.514 "small_bufsize": 8192, 00:14:05.514 "large_bufsize": 135168, 00:14:05.514 "enable_numa": false 00:14:05.514 } 00:14:05.514 } 00:14:05.514 ] 00:14:05.514 }, 00:14:05.514 { 00:14:05.514 "subsystem": "sock", 00:14:05.514 "config": [ 00:14:05.514 { 00:14:05.514 "method": "sock_set_default_impl", 00:14:05.514 "params": { 00:14:05.515 "impl_name": "uring" 00:14:05.515 } 00:14:05.515 }, 00:14:05.515 { 00:14:05.515 "method": "sock_impl_set_options", 00:14:05.515 "params": { 00:14:05.515 "impl_name": "ssl", 00:14:05.515 "recv_buf_size": 4096, 00:14:05.515 "send_buf_size": 4096, 00:14:05.515 "enable_recv_pipe": true, 00:14:05.515 "enable_quickack": false, 00:14:05.515 "enable_placement_id": 0, 00:14:05.515 "enable_zerocopy_send_server": true, 00:14:05.515 "enable_zerocopy_send_client": false, 00:14:05.515 "zerocopy_threshold": 0, 00:14:05.515 "tls_version": 0, 00:14:05.515 "enable_ktls": false 00:14:05.515 } 00:14:05.515 }, 00:14:05.515 { 00:14:05.515 "method": "sock_impl_set_options", 00:14:05.515 "params": { 00:14:05.515 "impl_name": "posix", 00:14:05.515 "recv_buf_size": 2097152, 00:14:05.515 "send_buf_size": 2097152, 
00:14:05.515 "enable_recv_pipe": true, 00:14:05.515 "enable_quickack": false, 00:14:05.515 "enable_placement_id": 0, 00:14:05.515 "enable_zerocopy_send_server": true, 00:14:05.515 "enable_zerocopy_send_client": false, 00:14:05.515 "zerocopy_threshold": 0, 00:14:05.515 "tls_version": 0, 00:14:05.515 "enable_ktls": false 00:14:05.515 } 00:14:05.515 }, 00:14:05.515 { 00:14:05.515 "method": "sock_impl_set_options", 00:14:05.515 "params": { 00:14:05.515 "impl_name": "uring", 00:14:05.515 "recv_buf_size": 2097152, 00:14:05.515 "send_buf_size": 2097152, 00:14:05.515 "enable_recv_pipe": true, 00:14:05.515 "enable_quickack": false, 00:14:05.515 "enable_placement_id": 0, 00:14:05.515 "enable_zerocopy_send_server": false, 00:14:05.515 "enable_zerocopy_send_client": false, 00:14:05.515 "zerocopy_threshold": 0, 00:14:05.515 "tls_version": 0, 00:14:05.515 "enable_ktls": false 00:14:05.515 } 00:14:05.515 } 00:14:05.515 ] 00:14:05.515 }, 00:14:05.515 { 00:14:05.515 "subsystem": "vmd", 00:14:05.515 "config": [] 00:14:05.515 }, 00:14:05.515 { 00:14:05.515 "subsystem": "accel", 00:14:05.515 "config": [ 00:14:05.515 { 00:14:05.515 "method": "accel_set_options", 00:14:05.515 "params": { 00:14:05.515 "small_cache_size": 128, 00:14:05.515 "large_cache_size": 16, 00:14:05.515 "task_count": 2048, 00:14:05.515 "sequence_count": 2048, 00:14:05.515 "buf_count": 2048 00:14:05.515 } 00:14:05.515 } 00:14:05.515 ] 00:14:05.515 }, 00:14:05.515 { 00:14:05.515 "subsystem": "bdev", 00:14:05.515 "config": [ 00:14:05.515 { 00:14:05.515 "method": "bdev_set_options", 00:14:05.515 "params": { 00:14:05.515 "bdev_io_pool_size": 65535, 00:14:05.515 "bdev_io_cache_size": 256, 00:14:05.515 "bdev_auto_examine": true, 00:14:05.515 "iobuf_small_cache_size": 128, 00:14:05.515 "iobuf_large_cache_size": 16 00:14:05.515 } 00:14:05.515 }, 00:14:05.515 { 00:14:05.515 "method": "bdev_raid_set_options", 00:14:05.515 "params": { 00:14:05.515 "process_window_size_kb": 1024, 00:14:05.515 "process_max_bandwidth_mb_sec": 0 00:14:05.515 } 00:14:05.515 }, 00:14:05.515 { 00:14:05.515 "method": "bdev_iscsi_set_options", 00:14:05.515 "params": { 00:14:05.515 "timeout_sec": 30 00:14:05.515 } 00:14:05.515 }, 00:14:05.515 { 00:14:05.515 "method": "bdev_nvme_set_options", 00:14:05.515 "params": { 00:14:05.515 "action_on_timeout": "none", 00:14:05.515 "timeout_us": 0, 00:14:05.515 "timeout_admin_us": 0, 00:14:05.515 "keep_alive_timeout_ms": 10000, 00:14:05.515 "arbitration_burst": 0, 00:14:05.515 "low_priority_weight": 0, 00:14:05.515 "medium_priority_weight": 0, 00:14:05.515 "high_priority_weight": 0, 00:14:05.515 "nvme_adminq_poll_period_us": 10000, 00:14:05.515 "nvme_ioq_poll_period_us": 0, 00:14:05.515 "io_queue_requests": 0, 00:14:05.515 "delay_cmd_submit": true, 00:14:05.515 "transport_retry_count": 4, 00:14:05.515 "bdev_retry_count": 3, 00:14:05.515 "transport_ack_timeout": 0, 00:14:05.515 "ctrlr_loss_timeout_sec": 0, 00:14:05.515 "reconnect_delay_sec": 0, 00:14:05.515 "fast_io_fail_timeout_sec": 0, 00:14:05.515 "disable_auto_failback": false, 00:14:05.515 "generate_uuids": false, 00:14:05.515 "transport_tos": 0, 00:14:05.515 "nvme_error_stat": false, 00:14:05.515 "rdma_srq_size": 0, 00:14:05.515 "io_path_stat": false, 00:14:05.515 "allow_accel_sequence": false, 00:14:05.515 "rdma_max_cq_size": 0, 00:14:05.515 "rdma_cm_event_timeout_ms": 0, 00:14:05.515 "dhchap_digests": [ 00:14:05.515 "sha256", 00:14:05.515 "sha384", 00:14:05.515 "sha512" 00:14:05.515 ], 00:14:05.515 "dhchap_dhgroups": [ 00:14:05.515 "null", 00:14:05.515 "ffdhe2048", 00:14:05.515 
"ffdhe3072", 00:14:05.515 "ffdhe4096", 00:14:05.515 "ffdhe6144", 00:14:05.515 "ffdhe8192" 00:14:05.515 ] 00:14:05.515 } 00:14:05.515 }, 00:14:05.515 { 00:14:05.515 "method": "bdev_nvme_set_hotplug", 00:14:05.515 "params": { 00:14:05.515 "period_us": 100000, 00:14:05.515 "enable": false 00:14:05.515 } 00:14:05.515 }, 00:14:05.515 { 00:14:05.515 "method": "bdev_malloc_create", 00:14:05.515 "params": { 00:14:05.515 "name": "malloc0", 00:14:05.515 "num_blocks": 8192, 00:14:05.515 "block_size": 4096, 00:14:05.515 "physical_block_size": 4096, 00:14:05.515 "uuid": "231455bf-b62e-4a95-a37c-c93028dbf959", 00:14:05.515 "optimal_io_boundary": 0, 00:14:05.516 "md_size": 0, 00:14:05.516 "dif_type": 0, 00:14:05.516 "dif_is_head_of_md": false, 00:14:05.516 "dif_pi_format": 0 00:14:05.516 } 00:14:05.516 }, 00:14:05.516 { 00:14:05.516 "method": "bdev_wait_for_examine" 00:14:05.516 } 00:14:05.516 ] 00:14:05.516 }, 00:14:05.516 { 00:14:05.516 "subsystem": "nbd", 00:14:05.516 "config": [] 00:14:05.516 }, 00:14:05.516 { 00:14:05.516 "subsystem": "scheduler", 00:14:05.516 "config": [ 00:14:05.516 { 00:14:05.516 "method": "framework_set_scheduler", 00:14:05.516 "params": { 00:14:05.516 "name": "static" 00:14:05.516 } 00:14:05.516 } 00:14:05.516 ] 00:14:05.516 }, 00:14:05.516 { 00:14:05.516 "subsystem": "nvmf", 00:14:05.516 "config": [ 00:14:05.516 { 00:14:05.516 "method": "nvmf_set_config", 00:14:05.516 "params": { 00:14:05.516 "discovery_filter": "match_any", 00:14:05.516 "admin_cmd_passthru": { 00:14:05.516 "identify_ctrlr": false 00:14:05.516 }, 00:14:05.516 "dhchap_digests": [ 00:14:05.516 "sha256", 00:14:05.516 "sha384", 00:14:05.516 "sha512" 00:14:05.516 ], 00:14:05.516 "dhchap_dhgroups": [ 00:14:05.516 "null", 00:14:05.516 "ffdhe2048", 00:14:05.516 "ffdhe3072", 00:14:05.516 "ffdhe4096", 00:14:05.516 "ffdhe6144", 00:14:05.516 "ffdhe8192" 00:14:05.516 ] 00:14:05.516 } 00:14:05.516 }, 00:14:05.516 { 00:14:05.516 "method": "nvmf_set_max_subsystems", 00:14:05.516 "params": { 00:14:05.516 "max_subsystems": 1024 00:14:05.516 } 00:14:05.516 }, 00:14:05.516 { 00:14:05.516 "method": "nvmf_set_crdt", 00:14:05.516 "params": { 00:14:05.516 "crdt1": 0, 00:14:05.516 "crdt2": 0, 00:14:05.516 "crdt3": 0 00:14:05.516 } 00:14:05.516 }, 00:14:05.516 { 00:14:05.516 "method": "nvmf_create_transport", 00:14:05.516 "params": { 00:14:05.516 "trtype": "TCP", 00:14:05.516 "max_queue_depth": 128, 00:14:05.516 "max_io_qpairs_per_ctrlr": 127, 00:14:05.516 "in_capsule_data_size": 4096, 00:14:05.516 "max_io_size": 131072, 00:14:05.516 "io_unit_size": 131072, 00:14:05.516 "max_aq_depth": 128, 00:14:05.516 "num_shared_buffers": 511, 00:14:05.516 "buf_cache_size": 4294967295, 00:14:05.516 "dif_insert_or_strip": false, 00:14:05.516 "zcopy": false, 00:14:05.516 "c2h_success": false, 00:14:05.516 "sock_priority": 0, 00:14:05.516 "abort_timeout_sec": 1, 00:14:05.516 "ack_timeout": 0, 00:14:05.516 "data_wr_pool_size": 0 00:14:05.516 } 00:14:05.516 }, 00:14:05.516 { 00:14:05.516 "method": "nvmf_create_subsystem", 00:14:05.516 "params": { 00:14:05.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:05.516 "allow_any_host": false, 00:14:05.516 "serial_number": "SPDK00000000000001", 00:14:05.516 "model_number": "SPDK bdev Controller", 00:14:05.516 "max_namespaces": 10, 00:14:05.516 "min_cntlid": 1, 00:14:05.516 "max_cntlid": 65519, 00:14:05.516 "ana_reporting": false 00:14:05.516 } 00:14:05.516 }, 00:14:05.516 { 00:14:05.516 "method": "nvmf_subsystem_add_host", 00:14:05.516 "params": { 00:14:05.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:05.516 
"host": "nqn.2016-06.io.spdk:host1", 00:14:05.516 "psk": "key0" 00:14:05.516 } 00:14:05.516 }, 00:14:05.516 { 00:14:05.516 "method": "nvmf_subsystem_add_ns", 00:14:05.516 "params": { 00:14:05.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:05.516 "namespace": { 00:14:05.516 "nsid": 1, 00:14:05.516 "bdev_name": "malloc0", 00:14:05.516 "nguid": "231455BFB62E4A95A37CC93028DBF959", 00:14:05.516 "uuid": "231455bf-b62e-4a95-a37c-c93028dbf959", 00:14:05.516 "no_auto_visible": false 00:14:05.516 } 00:14:05.516 } 00:14:05.516 }, 00:14:05.516 { 00:14:05.516 "method": "nvmf_subsystem_add_listener", 00:14:05.516 "params": { 00:14:05.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:05.516 "listen_address": { 00:14:05.516 "trtype": "TCP", 00:14:05.516 "adrfam": "IPv4", 00:14:05.516 "traddr": "10.0.0.3", 00:14:05.516 "trsvcid": "4420" 00:14:05.516 }, 00:14:05.516 "secure_channel": true 00:14:05.516 } 00:14:05.516 } 00:14:05.516 ] 00:14:05.516 } 00:14:05.516 ] 00:14:05.516 }' 00:14:05.516 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83849 00:14:05.516 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:05.516 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83849 00:14:05.516 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83849 ']' 00:14:05.516 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.516 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:05.516 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.516 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:05.516 01:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:05.516 [2024-11-19 01:55:15.994596] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:14:05.516 [2024-11-19 01:55:15.994674] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.516 [2024-11-19 01:55:16.130725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.775 [2024-11-19 01:55:16.150693] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.775 [2024-11-19 01:55:16.150934] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.775 [2024-11-19 01:55:16.150955] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.775 [2024-11-19 01:55:16.150964] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.775 [2024-11-19 01:55:16.150972] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:05.775 [2024-11-19 01:55:16.151326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.775 [2024-11-19 01:55:16.291884] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:05.775 [2024-11-19 01:55:16.344872] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:05.775 [2024-11-19 01:55:16.376819] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:05.775 [2024-11-19 01:55:16.377018] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:06.710 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:06.710 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:06.710 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:06.710 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:06.710 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:06.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:06.710 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.711 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=83880 00:14:06.711 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 83880 /var/tmp/bdevperf.sock 00:14:06.711 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83880 ']' 00:14:06.711 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:06.711 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:06.711 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:06.711 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:06.711 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:06.711 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:06.711 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:14:06.711 "subsystems": [ 00:14:06.711 { 00:14:06.711 "subsystem": "keyring", 00:14:06.711 "config": [ 00:14:06.711 { 00:14:06.711 "method": "keyring_file_add_key", 00:14:06.711 "params": { 00:14:06.711 "name": "key0", 00:14:06.711 "path": "/tmp/tmp.NIhhAywiM7" 00:14:06.711 } 00:14:06.711 } 00:14:06.711 ] 00:14:06.711 }, 00:14:06.711 { 00:14:06.711 "subsystem": "iobuf", 00:14:06.711 "config": [ 00:14:06.711 { 00:14:06.711 "method": "iobuf_set_options", 00:14:06.711 "params": { 00:14:06.711 "small_pool_count": 8192, 00:14:06.711 "large_pool_count": 1024, 00:14:06.711 "small_bufsize": 8192, 00:14:06.711 "large_bufsize": 135168, 00:14:06.711 "enable_numa": false 00:14:06.711 } 00:14:06.711 } 00:14:06.711 ] 00:14:06.711 }, 00:14:06.711 { 00:14:06.711 "subsystem": "sock", 00:14:06.711 "config": [ 00:14:06.711 { 00:14:06.711 "method": "sock_set_default_impl", 00:14:06.711 "params": { 00:14:06.711 "impl_name": "uring" 00:14:06.711 } 00:14:06.711 }, 00:14:06.711 { 00:14:06.711 "method": "sock_impl_set_options", 00:14:06.711 "params": { 00:14:06.711 "impl_name": "ssl", 00:14:06.711 "recv_buf_size": 4096, 00:14:06.711 "send_buf_size": 4096, 00:14:06.711 "enable_recv_pipe": true, 00:14:06.711 "enable_quickack": false, 00:14:06.711 "enable_placement_id": 0, 00:14:06.711 "enable_zerocopy_send_server": true, 00:14:06.711 "enable_zerocopy_send_client": false, 00:14:06.711 "zerocopy_threshold": 0, 00:14:06.711 "tls_version": 0, 00:14:06.711 "enable_ktls": false 00:14:06.711 } 00:14:06.711 }, 00:14:06.711 { 00:14:06.711 "method": "sock_impl_set_options", 00:14:06.711 "params": { 00:14:06.711 "impl_name": "posix", 00:14:06.711 "recv_buf_size": 2097152, 00:14:06.711 "send_buf_size": 2097152, 00:14:06.711 "enable_recv_pipe": true, 00:14:06.711 "enable_quickack": false, 00:14:06.711 "enable_placement_id": 0, 00:14:06.711 "enable_zerocopy_send_server": true, 00:14:06.711 "enable_zerocopy_send_client": false, 00:14:06.711 "zerocopy_threshold": 0, 00:14:06.711 "tls_version": 0, 00:14:06.711 "enable_ktls": false 00:14:06.711 } 00:14:06.711 }, 00:14:06.711 { 00:14:06.711 "method": "sock_impl_set_options", 00:14:06.711 "params": { 00:14:06.711 "impl_name": "uring", 00:14:06.711 "recv_buf_size": 2097152, 00:14:06.711 "send_buf_size": 2097152, 00:14:06.711 "enable_recv_pipe": true, 00:14:06.711 "enable_quickack": false, 00:14:06.711 "enable_placement_id": 0, 00:14:06.711 "enable_zerocopy_send_server": false, 00:14:06.711 "enable_zerocopy_send_client": false, 00:14:06.711 "zerocopy_threshold": 0, 00:14:06.711 "tls_version": 0, 00:14:06.711 "enable_ktls": false 00:14:06.711 } 00:14:06.711 } 00:14:06.711 ] 00:14:06.711 }, 00:14:06.711 { 00:14:06.711 "subsystem": "vmd", 00:14:06.711 "config": [] 00:14:06.711 }, 00:14:06.711 { 00:14:06.711 "subsystem": "accel", 00:14:06.711 "config": [ 00:14:06.711 { 00:14:06.711 "method": "accel_set_options", 00:14:06.711 "params": { 00:14:06.711 "small_cache_size": 128, 00:14:06.711 "large_cache_size": 16, 00:14:06.711 "task_count": 2048, 00:14:06.711 "sequence_count": 
2048, 00:14:06.711 "buf_count": 2048 00:14:06.711 } 00:14:06.711 } 00:14:06.711 ] 00:14:06.711 }, 00:14:06.711 { 00:14:06.711 "subsystem": "bdev", 00:14:06.711 "config": [ 00:14:06.711 { 00:14:06.711 "method": "bdev_set_options", 00:14:06.711 "params": { 00:14:06.711 "bdev_io_pool_size": 65535, 00:14:06.711 "bdev_io_cache_size": 256, 00:14:06.711 "bdev_auto_examine": true, 00:14:06.711 "iobuf_small_cache_size": 128, 00:14:06.711 "iobuf_large_cache_size": 16 00:14:06.711 } 00:14:06.711 }, 00:14:06.711 { 00:14:06.711 "method": "bdev_raid_set_options", 00:14:06.711 "params": { 00:14:06.711 "process_window_size_kb": 1024, 00:14:06.711 "process_max_bandwidth_mb_sec": 0 00:14:06.711 } 00:14:06.711 }, 00:14:06.711 { 00:14:06.711 "method": "bdev_iscsi_set_options", 00:14:06.711 "params": { 00:14:06.711 "timeout_sec": 30 00:14:06.711 } 00:14:06.711 }, 00:14:06.711 { 00:14:06.711 "method": "bdev_nvme_set_options", 00:14:06.711 "params": { 00:14:06.711 "action_on_timeout": "none", 00:14:06.711 "timeout_us": 0, 00:14:06.711 "timeout_admin_us": 0, 00:14:06.711 "keep_alive_timeout_ms": 10000, 00:14:06.711 "arbitration_burst": 0, 00:14:06.711 "low_priority_weight": 0, 00:14:06.711 "medium_priority_weight": 0, 00:14:06.711 "high_priority_weight": 0, 00:14:06.711 "nvme_adminq_poll_period_us": 10000, 00:14:06.711 "nvme_ioq_poll_period_us": 0, 00:14:06.711 "io_queue_requests": 512, 00:14:06.711 "delay_cmd_submit": true, 00:14:06.711 "transport_retry_count": 4, 00:14:06.711 "bdev_retry_count": 3, 00:14:06.711 "transport_ack_timeout": 0, 00:14:06.711 "ctrlr_loss_timeout_sec": 0, 00:14:06.711 "reconnect_delay_sec": 0, 00:14:06.711 "fast_io_fail_timeout_sec": 0, 00:14:06.711 "disable_auto_failback": false, 00:14:06.711 "generate_uuids": false, 00:14:06.711 "transport_tos": 0, 00:14:06.711 "nvme_error_stat": false, 00:14:06.711 "rdma_srq_size": 0, 00:14:06.711 "io_path_stat": false, 00:14:06.711 "allow_accel_sequence": false, 00:14:06.711 "rdma_max_cq_size": 0, 00:14:06.711 "rdma_cm_event_timeout_ms": 0, 00:14:06.711 "dhchap_digests": [ 00:14:06.711 "sha256", 00:14:06.711 "sha384", 00:14:06.711 "sha512" 00:14:06.711 ], 00:14:06.711 "dhchap_dhgroups": [ 00:14:06.711 "null", 00:14:06.711 "ffdhe2048", 00:14:06.711 "ffdhe3072", 00:14:06.711 "ffdhe4096", 00:14:06.711 "ffdhe6144", 00:14:06.711 "ffdhe8192" 00:14:06.711 ] 00:14:06.711 } 00:14:06.711 }, 00:14:06.711 { 00:14:06.711 "method": "bdev_nvme_attach_controller", 00:14:06.711 "params": { 00:14:06.711 "name": "TLSTEST", 00:14:06.711 "trtype": "TCP", 00:14:06.711 "adrfam": "IPv4", 00:14:06.711 "traddr": "10.0.0.3", 00:14:06.711 "trsvcid": "4420", 00:14:06.711 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:06.711 "prchk_reftag": false, 00:14:06.711 "prchk_guard": false, 00:14:06.711 "ctrlr_loss_timeout_sec": 0, 00:14:06.711 "reconnect_delay_sec": 0, 00:14:06.711 "fast_io_fail_timeout_sec": 0, 00:14:06.711 "psk": "key0", 00:14:06.711 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:06.711 "hdgst": false, 00:14:06.711 "ddgst": false, 00:14:06.711 "multipath": "multipath" 00:14:06.711 } 00:14:06.711 }, 00:14:06.711 { 00:14:06.711 "method": "bdev_nvme_set_hotplug", 00:14:06.711 "params": { 00:14:06.712 "period_us": 100000, 00:14:06.712 "enable": false 00:14:06.712 } 00:14:06.712 }, 00:14:06.712 { 00:14:06.712 "method": "bdev_wait_for_examine" 00:14:06.712 } 00:14:06.712 ] 00:14:06.712 }, 00:14:06.712 { 00:14:06.712 "subsystem": "nbd", 00:14:06.712 "config": [] 00:14:06.712 } 00:14:06.712 ] 00:14:06.712 }' 00:14:06.712 [2024-11-19 01:55:17.088219] Starting SPDK v25.01-pre git 
sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:14:06.712 [2024-11-19 01:55:17.088539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83880 ] 00:14:06.712 [2024-11-19 01:55:17.242295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.712 [2024-11-19 01:55:17.266986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.969 [2024-11-19 01:55:17.382266] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:06.969 [2024-11-19 01:55:17.413771] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:07.535 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:07.535 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:07.535 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:07.793 Running I/O for 10 seconds... 00:14:09.661 4043.00 IOPS, 15.79 MiB/s [2024-11-19T01:55:21.213Z] 3934.50 IOPS, 15.37 MiB/s [2024-11-19T01:55:22.592Z] 3902.00 IOPS, 15.24 MiB/s [2024-11-19T01:55:23.529Z] 3877.25 IOPS, 15.15 MiB/s [2024-11-19T01:55:24.543Z] 3863.60 IOPS, 15.09 MiB/s [2024-11-19T01:55:25.478Z] 3889.17 IOPS, 15.19 MiB/s [2024-11-19T01:55:26.413Z] 3949.14 IOPS, 15.43 MiB/s [2024-11-19T01:55:27.349Z] 3988.38 IOPS, 15.58 MiB/s [2024-11-19T01:55:28.287Z] 4020.22 IOPS, 15.70 MiB/s [2024-11-19T01:55:28.287Z] 4046.30 IOPS, 15.81 MiB/s 00:14:17.673 Latency(us) 00:14:17.673 [2024-11-19T01:55:28.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.673 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:17.673 Verification LBA range: start 0x0 length 0x2000 00:14:17.673 TLSTESTn1 : 10.02 4052.24 15.83 0.00 0.00 31529.81 5779.08 25499.46 00:14:17.673 [2024-11-19T01:55:28.288Z] =================================================================================================================== 00:14:17.673 [2024-11-19T01:55:28.288Z] Total : 4052.24 15.83 0.00 0.00 31529.81 5779.08 25499.46 00:14:17.673 { 00:14:17.673 "results": [ 00:14:17.673 { 00:14:17.673 "job": "TLSTESTn1", 00:14:17.673 "core_mask": "0x4", 00:14:17.673 "workload": "verify", 00:14:17.673 "status": "finished", 00:14:17.673 "verify_range": { 00:14:17.673 "start": 0, 00:14:17.673 "length": 8192 00:14:17.673 }, 00:14:17.673 "queue_depth": 128, 00:14:17.673 "io_size": 4096, 00:14:17.673 "runtime": 10.01595, 00:14:17.673 "iops": 4052.236682491426, 00:14:17.673 "mibps": 15.829049540982133, 00:14:17.673 "io_failed": 0, 00:14:17.673 "io_timeout": 0, 00:14:17.673 "avg_latency_us": 31529.81288034458, 00:14:17.673 "min_latency_us": 5779.083636363636, 00:14:17.673 "max_latency_us": 25499.46181818182 00:14:17.673 } 00:14:17.673 ], 00:14:17.673 "core_count": 1 00:14:17.673 } 00:14:17.673 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:17.673 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 83880 00:14:17.673 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83880 ']' 00:14:17.673 
01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83880 00:14:17.673 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:17.673 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:17.673 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83880 00:14:17.673 killing process with pid 83880 00:14:17.673 Received shutdown signal, test time was about 10.000000 seconds 00:14:17.673 00:14:17.673 Latency(us) 00:14:17.673 [2024-11-19T01:55:28.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.673 [2024-11-19T01:55:28.288Z] =================================================================================================================== 00:14:17.673 [2024-11-19T01:55:28.288Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:17.673 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:17.673 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:17.673 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83880' 00:14:17.673 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83880 00:14:17.673 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83880 00:14:17.933 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 83849 00:14:17.933 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83849 ']' 00:14:17.933 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83849 00:14:17.933 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:17.933 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:17.933 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83849 00:14:17.933 killing process with pid 83849 00:14:17.933 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:17.933 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:17.933 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83849' 00:14:17.933 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83849 00:14:17.933 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83849 00:14:18.192 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:14:18.192 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:18.192 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:18.192 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
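Each teardown above goes through autotest_common.sh's killprocess: check that the PID is non-empty and alive with kill -0, branch on uname, read the process name with ps to refuse killing sudo, then kill and wait. An approximate reconstruction of the Linux path of that helper, based only on the steps visible in this log:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid" 2>/dev/null || return 1    # still running?
      local name
      name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_1
      [ "$name" = sudo ] && return 1            # never kill sudo
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"
  }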
00:14:18.192 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84015 00:14:18.192 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:18.192 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84015 00:14:18.192 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84015 ']' 00:14:18.192 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.192 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:18.192 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.192 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:18.192 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.192 [2024-11-19 01:55:28.614834] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:14:18.192 [2024-11-19 01:55:28.615163] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.192 [2024-11-19 01:55:28.760036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.192 [2024-11-19 01:55:28.778224] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.192 [2024-11-19 01:55:28.778526] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:18.192 [2024-11-19 01:55:28.778710] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:18.192 [2024-11-19 01:55:28.778824] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:18.192 [2024-11-19 01:55:28.778928] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:18.192 [2024-11-19 01:55:28.779240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.192 [2024-11-19 01:55:28.809025] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:18.451 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:18.451 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:18.451 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:18.451 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:18.451 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.451 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:18.451 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.NIhhAywiM7 00:14:18.451 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.NIhhAywiM7 00:14:18.451 01:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:18.710 [2024-11-19 01:55:29.220956] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:18.710 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:18.968 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:19.536 [2024-11-19 01:55:29.865125] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:19.536 [2024-11-19 01:55:29.865365] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:19.536 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:19.795 malloc0 00:14:19.795 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:20.054 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.NIhhAywiM7 00:14:20.313 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:20.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
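At this point setup_nvmf_tgt has fully provisioned the new target (pid 84015) for TLS: a TCP transport, subsystem cnode1 backed by a malloc0 namespace, a listener created with -k (secure channel), the PSK file registered as key0, and host1 authorized with --psk key0. Condensed into one sequence, the rpc.py calls above are (addresses and the PSK path are the ones this run generated):

  RPC="$SPDK/scripts/rpc.py"
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC keyring_file_add_key key0 /tmp/tmp.NIhhAywiM7
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0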
00:14:20.571 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=84063 00:14:20.571 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:20.571 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:20.571 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 84063 /var/tmp/bdevperf.sock 00:14:20.571 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84063 ']' 00:14:20.571 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:20.571 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:20.571 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:20.571 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:20.571 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:20.571 [2024-11-19 01:55:31.172572] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:14:20.571 [2024-11-19 01:55:31.172908] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84063 ] 00:14:20.829 [2024-11-19 01:55:31.324547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.829 [2024-11-19 01:55:31.350598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.829 [2024-11-19 01:55:31.385822] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:21.087 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:21.087 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:21.087 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NIhhAywiM7 00:14:21.345 01:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:21.604 [2024-11-19 01:55:32.148314] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:21.604 nvme0n1 00:14:21.863 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:21.863 Running I/O for 1 seconds... 
00:14:22.800 4224.00 IOPS, 16.50 MiB/s 00:14:22.800 Latency(us) 00:14:22.800 [2024-11-19T01:55:33.415Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.800 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:22.800 Verification LBA range: start 0x0 length 0x2000 00:14:22.800 nvme0n1 : 1.02 4252.99 16.61 0.00 0.00 29804.78 7119.59 20494.89 00:14:22.800 [2024-11-19T01:55:33.415Z] =================================================================================================================== 00:14:22.800 [2024-11-19T01:55:33.415Z] Total : 4252.99 16.61 0.00 0.00 29804.78 7119.59 20494.89 00:14:22.800 { 00:14:22.800 "results": [ 00:14:22.800 { 00:14:22.800 "job": "nvme0n1", 00:14:22.800 "core_mask": "0x2", 00:14:22.800 "workload": "verify", 00:14:22.800 "status": "finished", 00:14:22.800 "verify_range": { 00:14:22.800 "start": 0, 00:14:22.800 "length": 8192 00:14:22.800 }, 00:14:22.800 "queue_depth": 128, 00:14:22.800 "io_size": 4096, 00:14:22.800 "runtime": 1.023281, 00:14:22.800 "iops": 4252.986227634443, 00:14:22.800 "mibps": 16.613227451697043, 00:14:22.800 "io_failed": 0, 00:14:22.800 "io_timeout": 0, 00:14:22.800 "avg_latency_us": 29804.776042780744, 00:14:22.800 "min_latency_us": 7119.592727272728, 00:14:22.800 "max_latency_us": 20494.894545454546 00:14:22.800 } 00:14:22.800 ], 00:14:22.800 "core_count": 1 00:14:22.800 } 00:14:22.800 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 84063 00:14:22.800 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84063 ']' 00:14:22.800 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84063 00:14:22.800 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:22.800 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:22.800 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84063 00:14:22.800 killing process with pid 84063 00:14:22.800 Received shutdown signal, test time was about 1.000000 seconds 00:14:22.800 00:14:22.800 Latency(us) 00:14:22.800 [2024-11-19T01:55:33.415Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.800 [2024-11-19T01:55:33.415Z] =================================================================================================================== 00:14:22.800 [2024-11-19T01:55:33.415Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:22.800 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:22.800 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:22.800 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84063' 00:14:22.800 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84063 00:14:22.800 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84063 00:14:23.059 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 84015 00:14:23.059 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84015 ']' 00:14:23.059 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84015 00:14:23.060 01:55:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:23.060 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:23.060 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84015 00:14:23.060 killing process with pid 84015 00:14:23.060 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:23.060 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:23.060 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84015' 00:14:23.060 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84015 00:14:23.060 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84015 00:14:23.319 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:14:23.319 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:23.319 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:23.319 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.319 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84107 00:14:23.319 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:23.319 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84107 00:14:23.319 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84107 ']' 00:14:23.319 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.319 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:23.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.319 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.319 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:23.319 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.319 [2024-11-19 01:55:33.781581] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:14:23.319 [2024-11-19 01:55:33.781689] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.319 [2024-11-19 01:55:33.925124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.579 [2024-11-19 01:55:33.945036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.579 [2024-11-19 01:55:33.945095] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:23.579 [2024-11-19 01:55:33.945123] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.579 [2024-11-19 01:55:33.945136] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.579 [2024-11-19 01:55:33.945143] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:23.579 [2024-11-19 01:55:33.945462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.579 [2024-11-19 01:55:33.975254] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:23.579 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:23.579 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:23.579 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:23.579 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:23.579 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.579 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.579 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:14:23.579 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.579 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.579 [2024-11-19 01:55:34.076135] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:23.579 malloc0 00:14:23.579 [2024-11-19 01:55:34.102697] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:23.579 [2024-11-19 01:55:34.102940] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:23.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:23.579 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.579 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=84130 00:14:23.579 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:23.579 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 84130 /var/tmp/bdevperf.sock 00:14:23.579 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84130 ']' 00:14:23.579 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:23.579 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:23.579 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
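The initiator steps that follow repeat the pattern used by every bdevperf iteration in this log: start bdevperf idle with -z, register the same PSK on its RPC socket, attach a TLS-secured controller, then drive traffic with bdevperf.py perform_tests. Condensed, with flags exactly as used here:

  BDEVPERF="$SPDK/build/examples/bdevperf"
  SOCK=/var/tmp/bdevperf.sock
  "$BDEVPERF" -m 2 -z -r "$SOCK" -q 128 -o 4k -w verify -t 1 &    # -z: wait for RPC before running
  $RPC -s "$SOCK" keyring_file_add_key key0 /tmp/tmp.NIhhAywiM7
  $RPC -s "$SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests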
00:14:23.579 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:23.579 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.579 [2024-11-19 01:55:34.190476] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:14:23.579 [2024-11-19 01:55:34.190779] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84130 ] 00:14:23.838 [2024-11-19 01:55:34.337102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.838 [2024-11-19 01:55:34.358922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.838 [2024-11-19 01:55:34.386525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:23.838 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:23.838 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:23.838 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NIhhAywiM7 00:14:24.097 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:24.356 [2024-11-19 01:55:34.953538] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:24.615 nvme0n1 00:14:24.615 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:24.615 Running I/O for 1 seconds... 
00:14:25.811 4096.00 IOPS, 16.00 MiB/s 00:14:25.811 Latency(us) 00:14:25.811 [2024-11-19T01:55:36.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.812 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:25.812 Verification LBA range: start 0x0 length 0x2000 00:14:25.812 nvme0n1 : 1.02 4135.51 16.15 0.00 0.00 30621.06 7119.59 20733.21 00:14:25.812 [2024-11-19T01:55:36.427Z] =================================================================================================================== 00:14:25.812 [2024-11-19T01:55:36.427Z] Total : 4135.51 16.15 0.00 0.00 30621.06 7119.59 20733.21 00:14:25.812 { 00:14:25.812 "results": [ 00:14:25.812 { 00:14:25.812 "job": "nvme0n1", 00:14:25.812 "core_mask": "0x2", 00:14:25.812 "workload": "verify", 00:14:25.812 "status": "finished", 00:14:25.812 "verify_range": { 00:14:25.812 "start": 0, 00:14:25.812 "length": 8192 00:14:25.812 }, 00:14:25.812 "queue_depth": 128, 00:14:25.812 "io_size": 4096, 00:14:25.812 "runtime": 1.021398, 00:14:25.812 "iops": 4135.508391439967, 00:14:25.812 "mibps": 16.154329654062373, 00:14:25.812 "io_failed": 0, 00:14:25.812 "io_timeout": 0, 00:14:25.812 "avg_latency_us": 30621.05917355372, 00:14:25.812 "min_latency_us": 7119.592727272728, 00:14:25.812 "max_latency_us": 20733.20727272727 00:14:25.812 } 00:14:25.812 ], 00:14:25.812 "core_count": 1 00:14:25.812 } 00:14:25.812 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:14:25.812 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.812 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:25.812 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.812 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:14:25.812 "subsystems": [ 00:14:25.812 { 00:14:25.812 "subsystem": "keyring", 00:14:25.812 "config": [ 00:14:25.812 { 00:14:25.812 "method": "keyring_file_add_key", 00:14:25.812 "params": { 00:14:25.812 "name": "key0", 00:14:25.812 "path": "/tmp/tmp.NIhhAywiM7" 00:14:25.812 } 00:14:25.812 } 00:14:25.812 ] 00:14:25.812 }, 00:14:25.812 { 00:14:25.812 "subsystem": "iobuf", 00:14:25.812 "config": [ 00:14:25.812 { 00:14:25.812 "method": "iobuf_set_options", 00:14:25.812 "params": { 00:14:25.812 "small_pool_count": 8192, 00:14:25.812 "large_pool_count": 1024, 00:14:25.812 "small_bufsize": 8192, 00:14:25.812 "large_bufsize": 135168, 00:14:25.812 "enable_numa": false 00:14:25.812 } 00:14:25.812 } 00:14:25.812 ] 00:14:25.812 }, 00:14:25.812 { 00:14:25.812 "subsystem": "sock", 00:14:25.812 "config": [ 00:14:25.812 { 00:14:25.812 "method": "sock_set_default_impl", 00:14:25.812 "params": { 00:14:25.812 "impl_name": "uring" 00:14:25.812 } 00:14:25.812 }, 00:14:25.812 { 00:14:25.812 "method": "sock_impl_set_options", 00:14:25.812 "params": { 00:14:25.812 "impl_name": "ssl", 00:14:25.812 "recv_buf_size": 4096, 00:14:25.812 "send_buf_size": 4096, 00:14:25.812 "enable_recv_pipe": true, 00:14:25.812 "enable_quickack": false, 00:14:25.812 "enable_placement_id": 0, 00:14:25.812 "enable_zerocopy_send_server": true, 00:14:25.812 "enable_zerocopy_send_client": false, 00:14:25.812 "zerocopy_threshold": 0, 00:14:25.812 "tls_version": 0, 00:14:25.812 "enable_ktls": false 00:14:25.812 } 00:14:25.812 }, 00:14:25.812 { 00:14:25.812 "method": "sock_impl_set_options", 00:14:25.812 "params": { 00:14:25.812 "impl_name": "posix", 
00:14:25.812 "recv_buf_size": 2097152, 00:14:25.812 "send_buf_size": 2097152, 00:14:25.812 "enable_recv_pipe": true, 00:14:25.812 "enable_quickack": false, 00:14:25.812 "enable_placement_id": 0, 00:14:25.812 "enable_zerocopy_send_server": true, 00:14:25.812 "enable_zerocopy_send_client": false, 00:14:25.812 "zerocopy_threshold": 0, 00:14:25.812 "tls_version": 0, 00:14:25.812 "enable_ktls": false 00:14:25.812 } 00:14:25.812 }, 00:14:25.812 { 00:14:25.812 "method": "sock_impl_set_options", 00:14:25.812 "params": { 00:14:25.812 "impl_name": "uring", 00:14:25.812 "recv_buf_size": 2097152, 00:14:25.812 "send_buf_size": 2097152, 00:14:25.812 "enable_recv_pipe": true, 00:14:25.812 "enable_quickack": false, 00:14:25.812 "enable_placement_id": 0, 00:14:25.812 "enable_zerocopy_send_server": false, 00:14:25.812 "enable_zerocopy_send_client": false, 00:14:25.812 "zerocopy_threshold": 0, 00:14:25.812 "tls_version": 0, 00:14:25.812 "enable_ktls": false 00:14:25.812 } 00:14:25.812 } 00:14:25.812 ] 00:14:25.812 }, 00:14:25.812 { 00:14:25.812 "subsystem": "vmd", 00:14:25.812 "config": [] 00:14:25.812 }, 00:14:25.812 { 00:14:25.812 "subsystem": "accel", 00:14:25.812 "config": [ 00:14:25.812 { 00:14:25.812 "method": "accel_set_options", 00:14:25.812 "params": { 00:14:25.812 "small_cache_size": 128, 00:14:25.812 "large_cache_size": 16, 00:14:25.812 "task_count": 2048, 00:14:25.812 "sequence_count": 2048, 00:14:25.812 "buf_count": 2048 00:14:25.812 } 00:14:25.812 } 00:14:25.812 ] 00:14:25.812 }, 00:14:25.812 { 00:14:25.812 "subsystem": "bdev", 00:14:25.812 "config": [ 00:14:25.812 { 00:14:25.812 "method": "bdev_set_options", 00:14:25.812 "params": { 00:14:25.812 "bdev_io_pool_size": 65535, 00:14:25.812 "bdev_io_cache_size": 256, 00:14:25.812 "bdev_auto_examine": true, 00:14:25.812 "iobuf_small_cache_size": 128, 00:14:25.812 "iobuf_large_cache_size": 16 00:14:25.812 } 00:14:25.812 }, 00:14:25.812 { 00:14:25.812 "method": "bdev_raid_set_options", 00:14:25.812 "params": { 00:14:25.812 "process_window_size_kb": 1024, 00:14:25.812 "process_max_bandwidth_mb_sec": 0 00:14:25.812 } 00:14:25.812 }, 00:14:25.812 { 00:14:25.812 "method": "bdev_iscsi_set_options", 00:14:25.812 "params": { 00:14:25.812 "timeout_sec": 30 00:14:25.812 } 00:14:25.812 }, 00:14:25.812 { 00:14:25.812 "method": "bdev_nvme_set_options", 00:14:25.812 "params": { 00:14:25.812 "action_on_timeout": "none", 00:14:25.812 "timeout_us": 0, 00:14:25.812 "timeout_admin_us": 0, 00:14:25.812 "keep_alive_timeout_ms": 10000, 00:14:25.812 "arbitration_burst": 0, 00:14:25.812 "low_priority_weight": 0, 00:14:25.812 "medium_priority_weight": 0, 00:14:25.812 "high_priority_weight": 0, 00:14:25.812 "nvme_adminq_poll_period_us": 10000, 00:14:25.812 "nvme_ioq_poll_period_us": 0, 00:14:25.812 "io_queue_requests": 0, 00:14:25.812 "delay_cmd_submit": true, 00:14:25.812 "transport_retry_count": 4, 00:14:25.812 "bdev_retry_count": 3, 00:14:25.812 "transport_ack_timeout": 0, 00:14:25.812 "ctrlr_loss_timeout_sec": 0, 00:14:25.812 "reconnect_delay_sec": 0, 00:14:25.812 "fast_io_fail_timeout_sec": 0, 00:14:25.812 "disable_auto_failback": false, 00:14:25.812 "generate_uuids": false, 00:14:25.812 "transport_tos": 0, 00:14:25.812 "nvme_error_stat": false, 00:14:25.812 "rdma_srq_size": 0, 00:14:25.812 "io_path_stat": false, 00:14:25.812 "allow_accel_sequence": false, 00:14:25.812 "rdma_max_cq_size": 0, 00:14:25.812 "rdma_cm_event_timeout_ms": 0, 00:14:25.812 "dhchap_digests": [ 00:14:25.812 "sha256", 00:14:25.812 "sha384", 00:14:25.812 "sha512" 00:14:25.812 ], 00:14:25.812 
"dhchap_dhgroups": [ 00:14:25.812 "null", 00:14:25.812 "ffdhe2048", 00:14:25.812 "ffdhe3072", 00:14:25.812 "ffdhe4096", 00:14:25.812 "ffdhe6144", 00:14:25.812 "ffdhe8192" 00:14:25.812 ] 00:14:25.812 } 00:14:25.812 }, 00:14:25.812 { 00:14:25.812 "method": "bdev_nvme_set_hotplug", 00:14:25.812 "params": { 00:14:25.812 "period_us": 100000, 00:14:25.812 "enable": false 00:14:25.812 } 00:14:25.812 }, 00:14:25.812 { 00:14:25.812 "method": "bdev_malloc_create", 00:14:25.812 "params": { 00:14:25.812 "name": "malloc0", 00:14:25.812 "num_blocks": 8192, 00:14:25.812 "block_size": 4096, 00:14:25.812 "physical_block_size": 4096, 00:14:25.812 "uuid": "06743a4c-d933-45fe-8b5a-431b43808a84", 00:14:25.812 "optimal_io_boundary": 0, 00:14:25.812 "md_size": 0, 00:14:25.812 "dif_type": 0, 00:14:25.812 "dif_is_head_of_md": false, 00:14:25.812 "dif_pi_format": 0 00:14:25.812 } 00:14:25.812 }, 00:14:25.812 { 00:14:25.812 "method": "bdev_wait_for_examine" 00:14:25.812 } 00:14:25.812 ] 00:14:25.812 }, 00:14:25.812 { 00:14:25.812 "subsystem": "nbd", 00:14:25.812 "config": [] 00:14:25.812 }, 00:14:25.812 { 00:14:25.812 "subsystem": "scheduler", 00:14:25.812 "config": [ 00:14:25.812 { 00:14:25.812 "method": "framework_set_scheduler", 00:14:25.812 "params": { 00:14:25.812 "name": "static" 00:14:25.812 } 00:14:25.812 } 00:14:25.812 ] 00:14:25.812 }, 00:14:25.812 { 00:14:25.812 "subsystem": "nvmf", 00:14:25.812 "config": [ 00:14:25.812 { 00:14:25.812 "method": "nvmf_set_config", 00:14:25.813 "params": { 00:14:25.813 "discovery_filter": "match_any", 00:14:25.813 "admin_cmd_passthru": { 00:14:25.813 "identify_ctrlr": false 00:14:25.813 }, 00:14:25.813 "dhchap_digests": [ 00:14:25.813 "sha256", 00:14:25.813 "sha384", 00:14:25.813 "sha512" 00:14:25.813 ], 00:14:25.813 "dhchap_dhgroups": [ 00:14:25.813 "null", 00:14:25.813 "ffdhe2048", 00:14:25.813 "ffdhe3072", 00:14:25.813 "ffdhe4096", 00:14:25.813 "ffdhe6144", 00:14:25.813 "ffdhe8192" 00:14:25.813 ] 00:14:25.813 } 00:14:25.813 }, 00:14:25.813 { 00:14:25.813 "method": "nvmf_set_max_subsystems", 00:14:25.813 "params": { 00:14:25.813 "max_subsystems": 1024 00:14:25.813 } 00:14:25.813 }, 00:14:25.813 { 00:14:25.813 "method": "nvmf_set_crdt", 00:14:25.813 "params": { 00:14:25.813 "crdt1": 0, 00:14:25.813 "crdt2": 0, 00:14:25.813 "crdt3": 0 00:14:25.813 } 00:14:25.813 }, 00:14:25.813 { 00:14:25.813 "method": "nvmf_create_transport", 00:14:25.813 "params": { 00:14:25.813 "trtype": "TCP", 00:14:25.813 "max_queue_depth": 128, 00:14:25.813 "max_io_qpairs_per_ctrlr": 127, 00:14:25.813 "in_capsule_data_size": 4096, 00:14:25.813 "max_io_size": 131072, 00:14:25.813 "io_unit_size": 131072, 00:14:25.813 "max_aq_depth": 128, 00:14:25.813 "num_shared_buffers": 511, 00:14:25.813 "buf_cache_size": 4294967295, 00:14:25.813 "dif_insert_or_strip": false, 00:14:25.813 "zcopy": false, 00:14:25.813 "c2h_success": false, 00:14:25.813 "sock_priority": 0, 00:14:25.813 "abort_timeout_sec": 1, 00:14:25.813 "ack_timeout": 0, 00:14:25.813 "data_wr_pool_size": 0 00:14:25.813 } 00:14:25.813 }, 00:14:25.813 { 00:14:25.813 "method": "nvmf_create_subsystem", 00:14:25.813 "params": { 00:14:25.813 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:25.813 "allow_any_host": false, 00:14:25.813 "serial_number": "00000000000000000000", 00:14:25.813 "model_number": "SPDK bdev Controller", 00:14:25.813 "max_namespaces": 32, 00:14:25.813 "min_cntlid": 1, 00:14:25.813 "max_cntlid": 65519, 00:14:25.813 "ana_reporting": false 00:14:25.813 } 00:14:25.813 }, 00:14:25.813 { 00:14:25.813 "method": "nvmf_subsystem_add_host", 
00:14:25.813 "params": { 00:14:25.813 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:25.813 "host": "nqn.2016-06.io.spdk:host1", 00:14:25.813 "psk": "key0" 00:14:25.813 } 00:14:25.813 }, 00:14:25.813 { 00:14:25.813 "method": "nvmf_subsystem_add_ns", 00:14:25.813 "params": { 00:14:25.813 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:25.813 "namespace": { 00:14:25.813 "nsid": 1, 00:14:25.813 "bdev_name": "malloc0", 00:14:25.813 "nguid": "06743A4CD93345FE8B5A431B43808A84", 00:14:25.813 "uuid": "06743a4c-d933-45fe-8b5a-431b43808a84", 00:14:25.813 "no_auto_visible": false 00:14:25.813 } 00:14:25.813 } 00:14:25.813 }, 00:14:25.813 { 00:14:25.813 "method": "nvmf_subsystem_add_listener", 00:14:25.813 "params": { 00:14:25.813 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:25.813 "listen_address": { 00:14:25.813 "trtype": "TCP", 00:14:25.813 "adrfam": "IPv4", 00:14:25.813 "traddr": "10.0.0.3", 00:14:25.813 "trsvcid": "4420" 00:14:25.813 }, 00:14:25.813 "secure_channel": false, 00:14:25.813 "sock_impl": "ssl" 00:14:25.813 } 00:14:25.813 } 00:14:25.813 ] 00:14:25.813 } 00:14:25.813 ] 00:14:25.813 }' 00:14:25.813 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:26.072 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:14:26.072 "subsystems": [ 00:14:26.072 { 00:14:26.072 "subsystem": "keyring", 00:14:26.072 "config": [ 00:14:26.072 { 00:14:26.072 "method": "keyring_file_add_key", 00:14:26.072 "params": { 00:14:26.072 "name": "key0", 00:14:26.072 "path": "/tmp/tmp.NIhhAywiM7" 00:14:26.072 } 00:14:26.072 } 00:14:26.072 ] 00:14:26.072 }, 00:14:26.072 { 00:14:26.072 "subsystem": "iobuf", 00:14:26.072 "config": [ 00:14:26.072 { 00:14:26.072 "method": "iobuf_set_options", 00:14:26.072 "params": { 00:14:26.072 "small_pool_count": 8192, 00:14:26.072 "large_pool_count": 1024, 00:14:26.072 "small_bufsize": 8192, 00:14:26.072 "large_bufsize": 135168, 00:14:26.072 "enable_numa": false 00:14:26.072 } 00:14:26.072 } 00:14:26.072 ] 00:14:26.072 }, 00:14:26.072 { 00:14:26.072 "subsystem": "sock", 00:14:26.072 "config": [ 00:14:26.072 { 00:14:26.072 "method": "sock_set_default_impl", 00:14:26.072 "params": { 00:14:26.072 "impl_name": "uring" 00:14:26.072 } 00:14:26.072 }, 00:14:26.072 { 00:14:26.072 "method": "sock_impl_set_options", 00:14:26.072 "params": { 00:14:26.072 "impl_name": "ssl", 00:14:26.072 "recv_buf_size": 4096, 00:14:26.072 "send_buf_size": 4096, 00:14:26.072 "enable_recv_pipe": true, 00:14:26.072 "enable_quickack": false, 00:14:26.072 "enable_placement_id": 0, 00:14:26.072 "enable_zerocopy_send_server": true, 00:14:26.072 "enable_zerocopy_send_client": false, 00:14:26.072 "zerocopy_threshold": 0, 00:14:26.072 "tls_version": 0, 00:14:26.072 "enable_ktls": false 00:14:26.072 } 00:14:26.072 }, 00:14:26.072 { 00:14:26.072 "method": "sock_impl_set_options", 00:14:26.072 "params": { 00:14:26.072 "impl_name": "posix", 00:14:26.072 "recv_buf_size": 2097152, 00:14:26.072 "send_buf_size": 2097152, 00:14:26.072 "enable_recv_pipe": true, 00:14:26.072 "enable_quickack": false, 00:14:26.072 "enable_placement_id": 0, 00:14:26.072 "enable_zerocopy_send_server": true, 00:14:26.072 "enable_zerocopy_send_client": false, 00:14:26.072 "zerocopy_threshold": 0, 00:14:26.072 "tls_version": 0, 00:14:26.072 "enable_ktls": false 00:14:26.072 } 00:14:26.072 }, 00:14:26.072 { 00:14:26.072 "method": "sock_impl_set_options", 00:14:26.072 "params": { 00:14:26.072 "impl_name": "uring", 00:14:26.072 
"recv_buf_size": 2097152, 00:14:26.072 "send_buf_size": 2097152, 00:14:26.072 "enable_recv_pipe": true, 00:14:26.072 "enable_quickack": false, 00:14:26.072 "enable_placement_id": 0, 00:14:26.072 "enable_zerocopy_send_server": false, 00:14:26.072 "enable_zerocopy_send_client": false, 00:14:26.072 "zerocopy_threshold": 0, 00:14:26.072 "tls_version": 0, 00:14:26.072 "enable_ktls": false 00:14:26.072 } 00:14:26.072 } 00:14:26.072 ] 00:14:26.072 }, 00:14:26.072 { 00:14:26.072 "subsystem": "vmd", 00:14:26.072 "config": [] 00:14:26.072 }, 00:14:26.072 { 00:14:26.072 "subsystem": "accel", 00:14:26.072 "config": [ 00:14:26.072 { 00:14:26.072 "method": "accel_set_options", 00:14:26.072 "params": { 00:14:26.072 "small_cache_size": 128, 00:14:26.072 "large_cache_size": 16, 00:14:26.072 "task_count": 2048, 00:14:26.072 "sequence_count": 2048, 00:14:26.072 "buf_count": 2048 00:14:26.072 } 00:14:26.072 } 00:14:26.072 ] 00:14:26.072 }, 00:14:26.072 { 00:14:26.072 "subsystem": "bdev", 00:14:26.072 "config": [ 00:14:26.072 { 00:14:26.072 "method": "bdev_set_options", 00:14:26.072 "params": { 00:14:26.072 "bdev_io_pool_size": 65535, 00:14:26.072 "bdev_io_cache_size": 256, 00:14:26.072 "bdev_auto_examine": true, 00:14:26.072 "iobuf_small_cache_size": 128, 00:14:26.072 "iobuf_large_cache_size": 16 00:14:26.072 } 00:14:26.072 }, 00:14:26.072 { 00:14:26.072 "method": "bdev_raid_set_options", 00:14:26.072 "params": { 00:14:26.072 "process_window_size_kb": 1024, 00:14:26.072 "process_max_bandwidth_mb_sec": 0 00:14:26.072 } 00:14:26.072 }, 00:14:26.072 { 00:14:26.072 "method": "bdev_iscsi_set_options", 00:14:26.072 "params": { 00:14:26.072 "timeout_sec": 30 00:14:26.072 } 00:14:26.072 }, 00:14:26.072 { 00:14:26.072 "method": "bdev_nvme_set_options", 00:14:26.072 "params": { 00:14:26.072 "action_on_timeout": "none", 00:14:26.072 "timeout_us": 0, 00:14:26.072 "timeout_admin_us": 0, 00:14:26.072 "keep_alive_timeout_ms": 10000, 00:14:26.072 "arbitration_burst": 0, 00:14:26.072 "low_priority_weight": 0, 00:14:26.072 "medium_priority_weight": 0, 00:14:26.072 "high_priority_weight": 0, 00:14:26.072 "nvme_adminq_poll_period_us": 10000, 00:14:26.072 "nvme_ioq_poll_period_us": 0, 00:14:26.072 "io_queue_requests": 512, 00:14:26.072 "delay_cmd_submit": true, 00:14:26.072 "transport_retry_count": 4, 00:14:26.072 "bdev_retry_count": 3, 00:14:26.072 "transport_ack_timeout": 0, 00:14:26.072 "ctrlr_loss_timeout_sec": 0, 00:14:26.072 "reconnect_delay_sec": 0, 00:14:26.072 "fast_io_fail_timeout_sec": 0, 00:14:26.072 "disable_auto_failback": false, 00:14:26.072 "generate_uuids": false, 00:14:26.072 "transport_tos": 0, 00:14:26.072 "nvme_error_stat": false, 00:14:26.072 "rdma_srq_size": 0, 00:14:26.072 "io_path_stat": false, 00:14:26.072 "allow_accel_sequence": false, 00:14:26.072 "rdma_max_cq_size": 0, 00:14:26.072 "rdma_cm_event_timeout_ms": 0, 00:14:26.072 "dhchap_digests": [ 00:14:26.072 "sha256", 00:14:26.072 "sha384", 00:14:26.072 "sha512" 00:14:26.072 ], 00:14:26.072 "dhchap_dhgroups": [ 00:14:26.072 "null", 00:14:26.072 "ffdhe2048", 00:14:26.072 "ffdhe3072", 00:14:26.072 "ffdhe4096", 00:14:26.072 "ffdhe6144", 00:14:26.072 "ffdhe8192" 00:14:26.072 ] 00:14:26.072 } 00:14:26.072 }, 00:14:26.072 { 00:14:26.072 "method": "bdev_nvme_attach_controller", 00:14:26.072 "params": { 00:14:26.072 "name": "nvme0", 00:14:26.072 "trtype": "TCP", 00:14:26.072 "adrfam": "IPv4", 00:14:26.073 "traddr": "10.0.0.3", 00:14:26.073 "trsvcid": "4420", 00:14:26.073 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.073 "prchk_reftag": false, 00:14:26.073 
"prchk_guard": false, 00:14:26.073 "ctrlr_loss_timeout_sec": 0, 00:14:26.073 "reconnect_delay_sec": 0, 00:14:26.073 "fast_io_fail_timeout_sec": 0, 00:14:26.073 "psk": "key0", 00:14:26.073 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:26.073 "hdgst": false, 00:14:26.073 "ddgst": false, 00:14:26.073 "multipath": "multipath" 00:14:26.073 } 00:14:26.073 }, 00:14:26.073 { 00:14:26.073 "method": "bdev_nvme_set_hotplug", 00:14:26.073 "params": { 00:14:26.073 "period_us": 100000, 00:14:26.073 "enable": false 00:14:26.073 } 00:14:26.073 }, 00:14:26.073 { 00:14:26.073 "method": "bdev_enable_histogram", 00:14:26.073 "params": { 00:14:26.073 "name": "nvme0n1", 00:14:26.073 "enable": true 00:14:26.073 } 00:14:26.073 }, 00:14:26.073 { 00:14:26.073 "method": "bdev_wait_for_examine" 00:14:26.073 } 00:14:26.073 ] 00:14:26.073 }, 00:14:26.073 { 00:14:26.073 "subsystem": "nbd", 00:14:26.073 "config": [] 00:14:26.073 } 00:14:26.073 ] 00:14:26.073 }' 00:14:26.073 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 84130 00:14:26.073 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84130 ']' 00:14:26.073 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84130 00:14:26.073 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:26.331 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:26.331 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84130 00:14:26.331 killing process with pid 84130 00:14:26.331 Received shutdown signal, test time was about 1.000000 seconds 00:14:26.331 00:14:26.331 Latency(us) 00:14:26.331 [2024-11-19T01:55:36.946Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.331 [2024-11-19T01:55:36.946Z] =================================================================================================================== 00:14:26.331 [2024-11-19T01:55:36.946Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:26.331 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:26.331 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:26.331 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84130' 00:14:26.331 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84130 00:14:26.332 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84130 00:14:26.332 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 84107 00:14:26.332 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84107 ']' 00:14:26.332 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84107 00:14:26.332 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:26.332 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:26.332 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84107 00:14:26.332 killing process with pid 84107 00:14:26.332 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:14:26.332 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:26.332 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84107' 00:14:26.332 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84107 00:14:26.332 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84107 00:14:26.591 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:14:26.591 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:26.591 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:26.591 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:14:26.591 "subsystems": [ 00:14:26.591 { 00:14:26.591 "subsystem": "keyring", 00:14:26.591 "config": [ 00:14:26.591 { 00:14:26.591 "method": "keyring_file_add_key", 00:14:26.591 "params": { 00:14:26.591 "name": "key0", 00:14:26.591 "path": "/tmp/tmp.NIhhAywiM7" 00:14:26.591 } 00:14:26.591 } 00:14:26.591 ] 00:14:26.591 }, 00:14:26.591 { 00:14:26.591 "subsystem": "iobuf", 00:14:26.591 "config": [ 00:14:26.591 { 00:14:26.591 "method": "iobuf_set_options", 00:14:26.591 "params": { 00:14:26.591 "small_pool_count": 8192, 00:14:26.591 "large_pool_count": 1024, 00:14:26.591 "small_bufsize": 8192, 00:14:26.591 "large_bufsize": 135168, 00:14:26.591 "enable_numa": false 00:14:26.591 } 00:14:26.591 } 00:14:26.591 ] 00:14:26.591 }, 00:14:26.591 { 00:14:26.591 "subsystem": "sock", 00:14:26.591 "config": [ 00:14:26.591 { 00:14:26.591 "method": "sock_set_default_impl", 00:14:26.591 "params": { 00:14:26.591 "impl_name": "uring" 00:14:26.591 } 00:14:26.591 }, 00:14:26.591 { 00:14:26.591 "method": "sock_impl_set_options", 00:14:26.591 "params": { 00:14:26.591 "impl_name": "ssl", 00:14:26.591 "recv_buf_size": 4096, 00:14:26.591 "send_buf_size": 4096, 00:14:26.591 "enable_recv_pipe": true, 00:14:26.591 "enable_quickack": false, 00:14:26.591 "enable_placement_id": 0, 00:14:26.591 "enable_zerocopy_send_server": true, 00:14:26.591 "enable_zerocopy_send_client": false, 00:14:26.591 "zerocopy_threshold": 0, 00:14:26.591 "tls_version": 0, 00:14:26.591 "enable_ktls": false 00:14:26.591 } 00:14:26.591 }, 00:14:26.591 { 00:14:26.591 "method": "sock_impl_set_options", 00:14:26.591 "params": { 00:14:26.591 "impl_name": "posix", 00:14:26.591 "recv_buf_size": 2097152, 00:14:26.591 "send_buf_size": 2097152, 00:14:26.591 "enable_recv_pipe": true, 00:14:26.591 "enable_quickack": false, 00:14:26.591 "enable_placement_id": 0, 00:14:26.591 "enable_zerocopy_send_server": true, 00:14:26.591 "enable_zerocopy_send_client": false, 00:14:26.591 "zerocopy_threshold": 0, 00:14:26.591 "tls_version": 0, 00:14:26.591 "enable_ktls": false 00:14:26.591 } 00:14:26.591 }, 00:14:26.591 { 00:14:26.591 "method": "sock_impl_set_options", 00:14:26.591 "params": { 00:14:26.591 "impl_name": "uring", 00:14:26.591 "recv_buf_size": 2097152, 00:14:26.591 "send_buf_size": 2097152, 00:14:26.591 "enable_recv_pipe": true, 00:14:26.591 "enable_quickack": false, 00:14:26.591 "enable_placement_id": 0, 00:14:26.591 "enable_zerocopy_send_server": false, 00:14:26.591 "enable_zerocopy_send_client": false, 00:14:26.591 "zerocopy_threshold": 0, 00:14:26.591 "tls_version": 0, 00:14:26.591 "enable_ktls": false 00:14:26.591 } 00:14:26.591 } 00:14:26.591 ] 00:14:26.591 }, 00:14:26.591 { 
00:14:26.591 "subsystem": "vmd", 00:14:26.591 "config": [] 00:14:26.591 }, 00:14:26.591 { 00:14:26.591 "subsystem": "accel", 00:14:26.591 "config": [ 00:14:26.591 { 00:14:26.591 "method": "accel_set_options", 00:14:26.591 "params": { 00:14:26.591 "small_cache_size": 128, 00:14:26.591 "large_cache_size": 16, 00:14:26.591 "task_count": 2048, 00:14:26.591 "sequence_count": 2048, 00:14:26.591 "buf_count": 2048 00:14:26.591 } 00:14:26.591 } 00:14:26.591 ] 00:14:26.591 }, 00:14:26.591 { 00:14:26.591 "subsystem": "bdev", 00:14:26.591 "config": [ 00:14:26.591 { 00:14:26.591 "method": "bdev_set_options", 00:14:26.591 "params": { 00:14:26.591 "bdev_io_pool_size": 65535, 00:14:26.591 "bdev_io_cache_size": 256, 00:14:26.591 "bdev_auto_examine": true, 00:14:26.591 "iobuf_small_cache_size": 128, 00:14:26.591 "iobuf_large_cache_size": 16 00:14:26.591 } 00:14:26.591 }, 00:14:26.591 { 00:14:26.591 "method": "bdev_raid_set_options", 00:14:26.591 "params": { 00:14:26.591 "process_window_size_kb": 1024, 00:14:26.591 "process_max_bandwidth_mb_sec": 0 00:14:26.591 } 00:14:26.591 }, 00:14:26.591 { 00:14:26.591 "method": "bdev_iscsi_set_options", 00:14:26.591 "params": { 00:14:26.591 "timeout_sec": 30 00:14:26.591 } 00:14:26.591 }, 00:14:26.591 { 00:14:26.591 "method": "bdev_nvme_set_options", 00:14:26.591 "params": { 00:14:26.591 "action_on_timeout": "none", 00:14:26.591 "timeout_us": 0, 00:14:26.591 "timeout_admin_us": 0, 00:14:26.591 "keep_alive_timeout_ms": 10000, 00:14:26.591 "arbitration_burst": 0, 00:14:26.591 "low_priority_weight": 0, 00:14:26.591 "medium_priority_weight": 0, 00:14:26.591 "high_priority_weight": 0, 00:14:26.591 "nvme_adminq_poll_period_us": 10000, 00:14:26.591 "nvme_ioq_poll_period_us": 0, 00:14:26.591 "io_queue_requests": 0, 00:14:26.591 "delay_cmd_submit": true, 00:14:26.591 "transport_retry_count": 4, 00:14:26.591 "bdev_retry_count": 3, 00:14:26.591 "transport_ack_timeout": 0, 00:14:26.591 "ctrlr_loss_timeout_sec": 0, 00:14:26.591 "reconnect_delay_sec": 0, 00:14:26.591 "fast_io_fail_timeout_sec": 0, 00:14:26.591 "disable_auto_failback": false, 00:14:26.591 "generate_uuids": false, 00:14:26.591 "transport_tos": 0, 00:14:26.591 "nvme_error_stat": false, 00:14:26.591 "rdma_srq_size": 0, 00:14:26.591 "io_path_stat": false, 00:14:26.591 "allow_accel_sequence": false, 00:14:26.591 "rdma_max_cq_size": 0, 00:14:26.591 "rdma_cm_event_timeout_ms": 0, 00:14:26.591 "dhchap_digests": [ 00:14:26.591 "sha256", 00:14:26.591 "sha384", 00:14:26.592 "sha512" 00:14:26.592 ], 00:14:26.592 "dhchap_dhgroups": [ 00:14:26.592 "null", 00:14:26.592 "ffdhe2048", 00:14:26.592 "ffdhe3072", 00:14:26.592 "ffdhe4096", 00:14:26.592 "ffdhe6144", 00:14:26.592 "ffdhe8192" 00:14:26.592 ] 00:14:26.592 } 00:14:26.592 }, 00:14:26.592 { 00:14:26.592 "method": "bdev_nvme_set_hotplug", 00:14:26.592 "params": { 00:14:26.592 "period_us": 100000, 00:14:26.592 "enable": false 00:14:26.592 } 00:14:26.592 }, 00:14:26.592 { 00:14:26.592 "method": "bdev_malloc_create", 00:14:26.592 "params": { 00:14:26.592 "name": "malloc0", 00:14:26.592 "num_blocks": 8192, 00:14:26.592 "block_size": 4096, 00:14:26.592 "physical_block_size": 4096, 00:14:26.592 "uuid": "06743a4c-d933-45fe-8b5a-431b43808a84", 00:14:26.592 "optimal_io_boundary": 0, 00:14:26.592 "md_size": 0, 00:14:26.592 "dif_type": 0, 00:14:26.592 "dif_is_head_of_md": false, 00:14:26.592 "dif_pi_format": 0 00:14:26.592 } 00:14:26.592 }, 00:14:26.592 { 00:14:26.592 "method": "bdev_wait_for_examine" 00:14:26.592 } 00:14:26.592 ] 00:14:26.592 }, 00:14:26.592 { 00:14:26.592 "subsystem": 
"nbd", 00:14:26.592 "config": [] 00:14:26.592 }, 00:14:26.592 { 00:14:26.592 "subsystem": "scheduler", 00:14:26.592 "config": [ 00:14:26.592 { 00:14:26.592 "method": "framework_set_scheduler", 00:14:26.592 "params": { 00:14:26.592 "name": "static" 00:14:26.592 } 00:14:26.592 } 00:14:26.592 ] 00:14:26.592 }, 00:14:26.592 { 00:14:26.592 "subsystem": "nvmf", 00:14:26.592 "config": [ 00:14:26.592 { 00:14:26.592 "method": "nvmf_set_config", 00:14:26.592 "params": { 00:14:26.592 "discovery_filter": "match_any", 00:14:26.592 "admin_cmd_passthru": { 00:14:26.592 "identify_ctrlr": false 00:14:26.592 }, 00:14:26.592 "dhchap_digests": [ 00:14:26.592 "sha256", 00:14:26.592 "sha384", 00:14:26.592 "sha512" 00:14:26.592 ], 00:14:26.592 "dhchap_dhgroups": [ 00:14:26.592 "null", 00:14:26.592 "ffdhe2048", 00:14:26.592 "ffdhe3072", 00:14:26.592 "ffdhe4096", 00:14:26.592 "ffdhe6144", 00:14:26.592 "ffdhe8192" 00:14:26.592 ] 00:14:26.592 } 00:14:26.592 }, 00:14:26.592 { 00:14:26.592 "method": "nvmf_set_max_subsystems", 00:14:26.592 "params": { 00:14:26.592 "max_subsystems": 1024 00:14:26.592 } 00:14:26.592 }, 00:14:26.592 { 00:14:26.592 "method": "nvmf_set_crdt", 00:14:26.592 "params": { 00:14:26.592 "crdt1": 0, 00:14:26.592 "crdt2": 0, 00:14:26.592 "crdt3": 0 00:14:26.592 } 00:14:26.592 }, 00:14:26.592 { 00:14:26.592 "method": "nvmf_create_transport", 00:14:26.592 "params": { 00:14:26.592 "trtype": "TCP", 00:14:26.592 "max_queue_depth": 128, 00:14:26.592 "max_io_qpairs_per_ctrlr": 127, 00:14:26.592 "in_capsule_data_size": 4096, 00:14:26.592 "max_io_size": 131072, 00:14:26.592 "io_unit_size": 131072, 00:14:26.592 "max_aq_depth": 128, 00:14:26.592 "num_shared_buffers": 511, 00:14:26.592 "buf_cache_size": 4294967295, 00:14:26.592 "dif_insert_or_strip": false, 00:14:26.592 "zcopy": false, 00:14:26.592 "c2h_success": false, 00:14:26.592 "sock_priority": 0, 00:14:26.592 "abort_timeout_sec": 1, 00:14:26.592 "ack_timeout": 0, 00:14:26.592 "data_wr_pool_size": 0 00:14:26.592 } 00:14:26.592 }, 00:14:26.592 { 00:14:26.592 "method": "nvmf_create_subsystem", 00:14:26.592 "params": { 00:14:26.592 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.592 "allow_any_host": false, 00:14:26.592 "serial_number": "00000000000000000000", 00:14:26.592 "model_number": "SPDK bdev Controller", 00:14:26.592 "max_namespaces": 32, 00:14:26.592 "min_cntlid": 1, 00:14:26.592 "max_cntlid": 65519, 00:14:26.592 "ana_reporting": false 00:14:26.592 } 00:14:26.592 }, 00:14:26.592 { 00:14:26.592 "method": "nvmf_subsystem_add_host", 00:14:26.592 "params": { 00:14:26.592 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.592 "host": "nqn.2016-06.io.spdk:host1", 00:14:26.592 "psk": "key0" 00:14:26.592 } 00:14:26.592 }, 00:14:26.592 { 00:14:26.592 "method": "nvmf_subsystem_add_ns", 00:14:26.592 "params": { 00:14:26.592 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.592 "namespace": { 00:14:26.592 "nsid": 1, 00:14:26.592 "bdev_name": "malloc0", 00:14:26.592 "nguid": "06743A4CD93345FE8B5A431B43808A84", 00:14:26.592 "uuid": "06743a4c-d933-45fe-8b5a-431b43808a84", 00:14:26.592 "no_auto_visible": false 00:14:26.592 } 00:14:26.592 } 00:14:26.592 }, 00:14:26.592 { 00:14:26.592 "method": "nvmf_subsystem_add_listener", 00:14:26.592 "params": { 00:14:26.592 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.592 "listen_address": { 00:14:26.592 "trtype": "TCP", 00:14:26.592 "adrfam": "IPv4", 00:14:26.592 "traddr": "10.0.0.3", 00:14:26.592 "trsvcid": "4420" 00:14:26.592 }, 00:14:26.592 "secure_channel": false, 00:14:26.592 "sock_impl": "ssl" 00:14:26.592 } 00:14:26.592 } 
00:14:26.592 ] 00:14:26.592 } 00:14:26.592 ] 00:14:26.592 }' 00:14:26.592 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:26.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.592 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84179 00:14:26.592 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:26.592 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84179 00:14:26.592 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84179 ']' 00:14:26.592 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.592 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:26.592 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.592 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:26.592 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:26.592 [2024-11-19 01:55:37.048467] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:14:26.592 [2024-11-19 01:55:37.048751] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.592 [2024-11-19 01:55:37.194724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.900 [2024-11-19 01:55:37.215582] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:26.900 [2024-11-19 01:55:37.215852] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:26.900 [2024-11-19 01:55:37.215994] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:26.900 [2024-11-19 01:55:37.216007] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:26.900 [2024-11-19 01:55:37.216014] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
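The target-side flow above reduces to a save_config round trip: the live target's JSON is captured over RPC and replayed into a fresh nvmf_tgt through a file descriptor. A minimal sketch, assuming the repo-relative paths used in this run and a bash here-string on fd 62 (the harness may feed /dev/fd/62 differently):

    # capture the running target's configuration as JSON over its RPC socket
    tgtcfg=$(scripts/rpc.py save_config)
    # boot a fresh target that reads that JSON from fd 62 (matching -c /dev/fd/62 above)
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 62<<< "$tgtcfg"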
00:14:26.900 [2024-11-19 01:55:37.216389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.900 [2024-11-19 01:55:37.358366] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:26.900 [2024-11-19 01:55:37.413321] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:26.900 [2024-11-19 01:55:37.445279] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:26.900 [2024-11-19 01:55:37.445654] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:27.467 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:27.467 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:27.467 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:27.467 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:27.467 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:27.725 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.725 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=84211 00:14:27.725 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 84211 /var/tmp/bdevperf.sock 00:14:27.725 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84211 ']' 00:14:27.725 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:27.725 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:27.725 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:14:27.725 "subsystems": [ 00:14:27.725 { 00:14:27.725 "subsystem": "keyring", 00:14:27.725 "config": [ 00:14:27.725 { 00:14:27.725 "method": "keyring_file_add_key", 00:14:27.725 "params": { 00:14:27.725 "name": "key0", 00:14:27.725 "path": "/tmp/tmp.NIhhAywiM7" 00:14:27.725 } 00:14:27.725 } 00:14:27.725 ] 00:14:27.725 }, 00:14:27.725 { 00:14:27.725 "subsystem": "iobuf", 00:14:27.725 "config": [ 00:14:27.725 { 00:14:27.725 "method": "iobuf_set_options", 00:14:27.725 "params": { 00:14:27.725 "small_pool_count": 8192, 00:14:27.725 "large_pool_count": 1024, 00:14:27.725 "small_bufsize": 8192, 00:14:27.725 "large_bufsize": 135168, 00:14:27.725 "enable_numa": false 00:14:27.725 } 00:14:27.725 } 00:14:27.725 ] 00:14:27.725 }, 00:14:27.725 { 00:14:27.725 "subsystem": "sock", 00:14:27.726 "config": [ 00:14:27.726 { 00:14:27.726 "method": "sock_set_default_impl", 00:14:27.726 "params": { 00:14:27.726 "impl_name": "uring" 00:14:27.726 } 00:14:27.726 }, 00:14:27.726 { 00:14:27.726 "method": "sock_impl_set_options", 00:14:27.726 "params": { 00:14:27.726 "impl_name": "ssl", 00:14:27.726 "recv_buf_size": 4096, 00:14:27.726 "send_buf_size": 4096, 00:14:27.726 "enable_recv_pipe": true, 00:14:27.726 "enable_quickack": false, 00:14:27.726 "enable_placement_id": 0, 00:14:27.726 "enable_zerocopy_send_server": true, 00:14:27.726 "enable_zerocopy_send_client": false, 00:14:27.726 "zerocopy_threshold": 0, 00:14:27.726 "tls_version": 0, 00:14:27.726 "enable_ktls": 
false 00:14:27.726 } 00:14:27.726 }, 00:14:27.726 { 00:14:27.726 "method": "sock_impl_set_options", 00:14:27.726 "params": { 00:14:27.726 "impl_name": "posix", 00:14:27.726 "recv_buf_size": 2097152, 00:14:27.726 "send_buf_size": 2097152, 00:14:27.726 "enable_recv_pipe": true, 00:14:27.726 "enable_quickack": false, 00:14:27.726 "enable_placement_id": 0, 00:14:27.726 "enable_zerocopy_send_server": true, 00:14:27.726 "enable_zerocopy_send_client": false, 00:14:27.726 "zerocopy_threshold": 0, 00:14:27.726 "tls_version": 0, 00:14:27.726 "enable_ktls": false 00:14:27.726 } 00:14:27.726 }, 00:14:27.726 { 00:14:27.726 "method": "sock_impl_set_options", 00:14:27.726 "params": { 00:14:27.726 "impl_name": "uring", 00:14:27.726 "recv_buf_size": 2097152, 00:14:27.726 "send_buf_size": 2097152, 00:14:27.726 "enable_recv_pipe": true, 00:14:27.726 "enable_quickack": false, 00:14:27.726 "enable_placement_id": 0, 00:14:27.726 "enable_zerocopy_send_server": false, 00:14:27.726 "enable_zerocopy_send_client": false, 00:14:27.726 "zerocopy_threshold": 0, 00:14:27.726 "tls_version": 0, 00:14:27.726 "enable_ktls": false 00:14:27.726 } 00:14:27.726 } 00:14:27.726 ] 00:14:27.726 }, 00:14:27.726 { 00:14:27.726 "subsystem": "vmd", 00:14:27.726 "config": [] 00:14:27.726 }, 00:14:27.726 { 00:14:27.726 "subsystem": "accel", 00:14:27.726 "config": [ 00:14:27.726 { 00:14:27.726 "method": "accel_set_options", 00:14:27.726 "params": { 00:14:27.726 "small_cache_size": 128, 00:14:27.726 "large_cache_size": 16, 00:14:27.726 "task_count": 2048, 00:14:27.726 "sequence_count": 2048, 00:14:27.726 "buf_count": 2048 00:14:27.726 } 00:14:27.726 } 00:14:27.726 ] 00:14:27.726 }, 00:14:27.726 { 00:14:27.726 "subsystem": "bdev", 00:14:27.726 "config": [ 00:14:27.726 { 00:14:27.726 "method": "bdev_set_options", 00:14:27.726 "params": { 00:14:27.726 "bdev_io_pool_size": 65535, 00:14:27.726 "bdev_io_cache_size": 256, 00:14:27.726 "bdev_auto_examine": true, 00:14:27.726 "iobuf_small_cache_size": 128, 00:14:27.726 "iobuf_large_cache_size": 16 00:14:27.726 } 00:14:27.726 }, 00:14:27.726 { 00:14:27.726 "method": "bdev_raid_set_options", 00:14:27.726 "params": { 00:14:27.726 "process_window_size_kb": 1024, 00:14:27.726 "process_max_bandwidth_mb_sec": 0 00:14:27.726 } 00:14:27.726 }, 00:14:27.726 { 00:14:27.726 "method": "bdev_iscsi_set_options", 00:14:27.726 "params": { 00:14:27.726 "timeout_sec": 30 00:14:27.726 } 00:14:27.726 }, 00:14:27.726 { 00:14:27.726 "method": "bdev_nvme_set_options", 00:14:27.726 "params": { 00:14:27.726 "action_on_timeout": "none", 00:14:27.726 "timeout_us": 0, 00:14:27.726 "timeout_admin_us": 0, 00:14:27.726 "keep_alive_timeout_ms": 10000, 00:14:27.726 "arbitration_burst": 0, 00:14:27.726 "low_priority_weight": 0, 00:14:27.726 "medium_priority_weight": 0, 00:14:27.726 "high_priority_weight": 0, 00:14:27.726 "nvme_adminq_poll_period_us": 10000, 00:14:27.726 "nvme_ioq_poll_period_us": 0, 00:14:27.726 "io_queue_requests": 512, 00:14:27.726 "delay_cmd_submit": true, 00:14:27.726 "transport_retry_count": 4, 00:14:27.726 "bdev_retry_count": 3, 00:14:27.726 "transport_ack_timeout": 0, 00:14:27.726 "ctrlr_loss_timeout_sec": 0, 00:14:27.726 "reconnect_delay_sec": 0, 00:14:27.726 "fast_io_fail_timeout_sec": 0, 00:14:27.726 "disable_auto_failback": false, 00:14:27.726 "generate_uuids": false, 00:14:27.726 "transport_tos": 0, 00:14:27.726 "nvme_error_stat": false, 00:14:27.726 "rdma_srq_size": 0, 00:14:27.726 "io_path_stat": false, 00:14:27.726 "allow_accel_sequence": false, 00:14:27.726 "rdma_max_cq_size": 0, 00:14:27.726 
"rdma_cm_event_timeout_ms": 0, 00:14:27.726 "dhchap_digests": [ 00:14:27.726 "sha256", 00:14:27.726 "sha384", 00:14:27.726 "sha512" 00:14:27.726 ], 00:14:27.726 "dhchap_dhgroups": [ 00:14:27.726 "null", 00:14:27.726 "ffdhe2048", 00:14:27.726 "ffdhe3072", 00:14:27.726 "ffdhe4096", 00:14:27.726 "ffdhe6144", 00:14:27.726 "ffdhe8192" 00:14:27.726 ] 00:14:27.726 } 00:14:27.726 }, 00:14:27.726 { 00:14:27.726 "method": "bdev_nvme_attach_controller", 00:14:27.726 "params": { 00:14:27.726 "name": "nvme0", 00:14:27.726 "trtype": "TCP", 00:14:27.726 "adrfam": "IPv4", 00:14:27.726 "traddr": "10.0.0.3", 00:14:27.726 "trsvcid": "4420", 00:14:27.726 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:27.726 "prchk_reftag": false, 00:14:27.726 "prchk_guard": false, 00:14:27.726 "ctrlr_loss_timeout_sec": 0, 00:14:27.726 "reconnect_delay_sec": 0, 00:14:27.726 "fast_io_fail_timeout_sec": 0, 00:14:27.726 "psk": "key0", 00:14:27.726 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:27.726 "hdgst": false, 00:14:27.726 "ddgst": false, 00:14:27.726 "multipath": "multipath" 00:14:27.726 } 00:14:27.726 }, 00:14:27.726 { 00:14:27.726 "method": "bdev_nvme_set_hotplug", 00:14:27.726 "params": { 00:14:27.726 "period_us": 100000, 00:14:27.726 "enable": false 00:14:27.726 } 00:14:27.726 }, 00:14:27.726 { 00:14:27.726 "method": "bdev_enable_histogram", 00:14:27.726 "params": { 00:14:27.726 "name": "nvme0n1", 00:14:27.726 "enable": true 00:14:27.726 } 00:14:27.726 }, 00:14:27.726 { 00:14:27.726 "method": "bdev_wait_for_examine" 00:14:27.726 } 00:14:27.726 ] 00:14:27.726 }, 00:14:27.726 { 00:14:27.726 "subsystem": "nbd", 00:14:27.726 "config": [] 00:14:27.726 } 00:14:27.726 ] 00:14:27.726 }' 00:14:27.726 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:27.726 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:27.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:27.726 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:27.726 01:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:27.726 [2024-11-19 01:55:38.133303] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:14:27.726 [2024-11-19 01:55:38.133601] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84211 ] 00:14:27.726 [2024-11-19 01:55:38.280171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.726 [2024-11-19 01:55:38.304773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.985 [2024-11-19 01:55:38.416994] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:27.985 [2024-11-19 01:55:38.446760] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:28.921 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:28.921 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:28.921 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:28.921 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:14:29.179 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.179 01:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:29.179 Running I/O for 1 seconds... 00:14:30.374 3328.00 IOPS, 13.00 MiB/s 00:14:30.374 Latency(us) 00:14:30.374 [2024-11-19T01:55:40.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.374 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:30.374 Verification LBA range: start 0x0 length 0x2000 00:14:30.374 nvme0n1 : 1.03 3357.19 13.11 0.00 0.00 37722.15 8043.05 23831.27 00:14:30.374 [2024-11-19T01:55:40.989Z] =================================================================================================================== 00:14:30.374 [2024-11-19T01:55:40.989Z] Total : 3357.19 13.11 0.00 0.00 37722.15 8043.05 23831.27 00:14:30.374 { 00:14:30.374 "results": [ 00:14:30.374 { 00:14:30.374 "job": "nvme0n1", 00:14:30.374 "core_mask": "0x2", 00:14:30.374 "workload": "verify", 00:14:30.374 "status": "finished", 00:14:30.374 "verify_range": { 00:14:30.374 "start": 0, 00:14:30.374 "length": 8192 00:14:30.374 }, 00:14:30.374 "queue_depth": 128, 00:14:30.374 "io_size": 4096, 00:14:30.374 "runtime": 1.029433, 00:14:30.374 "iops": 3357.187888866978, 00:14:30.374 "mibps": 13.114015190886633, 00:14:30.374 "io_failed": 0, 00:14:30.374 "io_timeout": 0, 00:14:30.374 "avg_latency_us": 37722.15164983165, 00:14:30.374 "min_latency_us": 8043.054545454545, 00:14:30.374 "max_latency_us": 23831.272727272728 00:14:30.374 } 00:14:30.374 ], 00:14:30.374 "core_count": 1 00:14:30.374 } 00:14:30.374 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:14:30.374 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:14:30.374 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:30.374 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:14:30.374 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 
00:14:30.374 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:14:30.374 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:30.374 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:14:30.374 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:14:30.374 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:14:30.374 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:30.374 nvmf_trace.0 00:14:30.374 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:14:30.374 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 84211 00:14:30.374 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84211 ']' 00:14:30.374 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84211 00:14:30.374 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:30.374 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:30.374 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84211 00:14:30.374 killing process with pid 84211 00:14:30.374 Received shutdown signal, test time was about 1.000000 seconds 00:14:30.374 00:14:30.374 Latency(us) 00:14:30.374 [2024-11-19T01:55:40.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.374 [2024-11-19T01:55:40.989Z] =================================================================================================================== 00:14:30.374 [2024-11-19T01:55:40.989Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:30.374 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:30.374 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:30.374 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84211' 00:14:30.374 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84211 00:14:30.374 01:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84211 00:14:30.633 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:30.633 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:30.633 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:14:30.633 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:30.633 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:14:30.633 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:30.633 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:30.633 rmmod nvme_tcp 00:14:30.633 rmmod nvme_fabrics 00:14:30.633 rmmod nvme_keyring 00:14:30.633 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:30.633 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:14:30.633 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:14:30.633 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 84179 ']' 00:14:30.633 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 84179 00:14:30.633 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84179 ']' 00:14:30.633 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84179 00:14:30.633 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:30.633 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:30.633 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84179 00:14:30.633 killing process with pid 84179 00:14:30.633 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:30.634 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:30.634 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84179' 00:14:30.634 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84179 00:14:30.634 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84179 00:14:30.893 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:30.893 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:30.893 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:30.893 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:14:30.893 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:14:30.893 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:30.893 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:14:30.893 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:30.893 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:30.893 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:30.893 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:30.893 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:30.893 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:30.893 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:30.893 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:30.893 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:30.893 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:30.893 01:55:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:30.893 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:30.893 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:30.893 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:30.893 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:30.893 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:30.893 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.893 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:30.893 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.bLNyANlV1d /tmp/tmp.8oahl20J48 /tmp/tmp.NIhhAywiM7 00:14:31.152 ************************************ 00:14:31.152 END TEST nvmf_tls 00:14:31.152 ************************************ 00:14:31.152 00:14:31.152 real 1m19.674s 00:14:31.152 user 2m10.436s 00:14:31.152 sys 0m26.254s 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:31.152 ************************************ 00:14:31.152 START TEST nvmf_fips 00:14:31.152 ************************************ 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:31.152 * Looking for test storage... 
00:14:31.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:31.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.152 --rc genhtml_branch_coverage=1 00:14:31.152 --rc genhtml_function_coverage=1 00:14:31.152 --rc genhtml_legend=1 00:14:31.152 --rc geninfo_all_blocks=1 00:14:31.152 --rc geninfo_unexecuted_blocks=1 00:14:31.152 00:14:31.152 ' 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:31.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.152 --rc genhtml_branch_coverage=1 00:14:31.152 --rc genhtml_function_coverage=1 00:14:31.152 --rc genhtml_legend=1 00:14:31.152 --rc geninfo_all_blocks=1 00:14:31.152 --rc geninfo_unexecuted_blocks=1 00:14:31.152 00:14:31.152 ' 00:14:31.152 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:31.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.152 --rc genhtml_branch_coverage=1 00:14:31.152 --rc genhtml_function_coverage=1 00:14:31.152 --rc genhtml_legend=1 00:14:31.152 --rc geninfo_all_blocks=1 00:14:31.152 --rc geninfo_unexecuted_blocks=1 00:14:31.152 00:14:31.152 ' 00:14:31.153 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:31.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.153 --rc genhtml_branch_coverage=1 00:14:31.153 --rc genhtml_function_coverage=1 00:14:31.153 --rc genhtml_legend=1 00:14:31.153 --rc geninfo_all_blocks=1 00:14:31.153 --rc geninfo_unexecuted_blocks=1 00:14:31.153 00:14:31.153 ' 00:14:31.153 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:31.153 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:31.153 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
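The common.sh sourcing that continues below derives the NVMe-oF host identity reused by every nvme connect in these tests. The gist, as a sketch (the parameter expansion for NVME_HOSTID is an assumed way to take the trailing UUID off the generated NQN):

    # generate a host NQN once; its trailing UUID doubles as the host ID
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")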
00:14:31.153 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.153 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.153 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.153 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.153 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.153 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.153 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.153 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.153 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.412 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:31.413 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:14:31.413 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:14:31.414 Error setting digest 00:14:31.414 40A2EDCA2A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:14:31.414 40A2EDCA2A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:31.414 
01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:31.414 Cannot find device "nvmf_init_br" 00:14:31.414 01:55:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:31.414 Cannot find device "nvmf_init_br2" 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:31.414 Cannot find device "nvmf_tgt_br" 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:14:31.414 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:31.414 Cannot find device "nvmf_tgt_br2" 00:14:31.414 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:14:31.414 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:31.414 Cannot find device "nvmf_init_br" 00:14:31.414 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:14:31.414 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:31.673 Cannot find device "nvmf_init_br2" 00:14:31.673 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:14:31.673 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:31.673 Cannot find device "nvmf_tgt_br" 00:14:31.673 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:14:31.673 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:31.673 Cannot find device "nvmf_tgt_br2" 00:14:31.673 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:14:31.673 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:31.673 Cannot find device "nvmf_br" 00:14:31.673 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:14:31.673 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:31.673 Cannot find device "nvmf_init_if" 00:14:31.673 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:14:31.673 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:31.673 Cannot find device "nvmf_init_if2" 00:14:31.673 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:14:31.673 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:31.673 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:31.673 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:14:31.674 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:31.674 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:31.674 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:14:31.674 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:31.674 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:31.674 01:55:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:31.674 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:31.674 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:31.674 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:31.674 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:31.674 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:31.674 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:31.674 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:31.674 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:31.674 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:31.674 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:31.674 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:31.674 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:31.674 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:31.674 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:31.933 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:31.933 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:14:31.933 00:14:31.933 --- 10.0.0.3 ping statistics --- 00:14:31.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.933 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:31.933 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:31.933 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:14:31.933 00:14:31.933 --- 10.0.0.4 ping statistics --- 00:14:31.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.933 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:31.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:31.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:14:31.933 00:14:31.933 --- 10.0.0.1 ping statistics --- 00:14:31.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.933 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:31.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:31.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:14:31.933 00:14:31.933 --- 10.0.0.2 ping statistics --- 00:14:31.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.933 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=84530 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 84530 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 84530 ']' 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:31.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:31.933 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:31.933 [2024-11-19 01:55:42.543305] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
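(Annotation: the nvmf_veth_init sequence traced above builds the whole test network in software — veth pairs for initiator and target, the target ends moved into a network namespace, everything joined by a bridge, then validated with the four pings before nvmf_tgt is started inside the namespace. A condensed sketch of the topology, using the interface names from the log but reduced to one initiator/target pair and without the SPDK helper wrappers:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br  master nvmf_br && ip link set nvmf_tgt_br up
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ping -c 1 10.0.0.3    # host -> namespaced target, as in the trace

The "Cannot find device" messages earlier are expected: teardown of a previous run is attempted first, and each failing command is followed by true so the script keeps going.)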
00:14:31.933 [2024-11-19 01:55:42.543950] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.192 [2024-11-19 01:55:42.698548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.192 [2024-11-19 01:55:42.720444] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.192 [2024-11-19 01:55:42.720513] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:32.192 [2024-11-19 01:55:42.720528] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:32.192 [2024-11-19 01:55:42.720539] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:32.192 [2024-11-19 01:55:42.720548] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:32.192 [2024-11-19 01:55:42.720895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.192 [2024-11-19 01:55:42.753021] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:32.192 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:32.192 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:14:32.192 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:32.192 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:32.192 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:32.450 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.450 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:14:32.450 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:32.450 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:14:32.450 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.6DS 00:14:32.450 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:32.450 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.6DS 00:14:32.450 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.6DS 00:14:32.450 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.6DS 00:14:32.450 01:55:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:32.708 [2024-11-19 01:55:43.151581] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:32.708 [2024-11-19 01:55:43.167577] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:32.708 [2024-11-19 01:55:43.167849] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:32.708 malloc0 00:14:32.708 01:55:43 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:32.708 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=84559 00:14:32.708 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 84559 /var/tmp/bdevperf.sock 00:14:32.708 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:32.708 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 84559 ']' 00:14:32.708 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:32.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:32.708 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:32.708 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:32.708 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:32.708 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:32.708 [2024-11-19 01:55:43.317686] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:14:32.708 [2024-11-19 01:55:43.318074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84559 ] 00:14:32.966 [2024-11-19 01:55:43.471684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.966 [2024-11-19 01:55:43.497373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:32.966 [2024-11-19 01:55:43.532736] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:32.966 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:32.966 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:14:32.966 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.6DS 00:14:33.532 01:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:33.791 [2024-11-19 01:55:44.187426] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:33.791 TLSTESTn1 00:14:33.791 01:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:33.791 Running I/O for 10 seconds... 
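(Annotation: the sequence just traced is the heart of the FIPS test — a TLS PSK interchange key is written to a mode-0600 temp file, registered in the bdevperf keyring, and used to attach an NVMe/TCP controller against the TLS listener on 10.0.0.3:4420. Roughly, with the key value, paths, and rpc flags copied from the trace:

    # Write the PSK interchange key with restrictive permissions
    key_path=$(mktemp -t spdk-psk.XXX)
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
    chmod 0600 "$key_path"
    # Register it with the bdevperf app and attach over TLS
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    # Drive verify I/O for 10 s against the resulting TLSTESTn1 bdev
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The earlier negative check — NOT openssl md5 — matters here too: under a FIPS-enabled OpenSSL the MD5 fetch must fail ("Error setting digest" above is the expected outcome), which proves the provider configuration actually took effect before the TLS traffic is measured.)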
00:14:36.096 3301.00 IOPS, 12.89 MiB/s [2024-11-19T01:55:47.647Z] 3386.00 IOPS, 13.23 MiB/s [2024-11-19T01:55:48.584Z] 3505.33 IOPS, 13.69 MiB/s [2024-11-19T01:55:49.519Z] 3728.50 IOPS, 14.56 MiB/s [2024-11-19T01:55:50.455Z] 3859.80 IOPS, 15.08 MiB/s [2024-11-19T01:55:51.833Z] 3943.50 IOPS, 15.40 MiB/s [2024-11-19T01:55:52.769Z] 4010.57 IOPS, 15.67 MiB/s [2024-11-19T01:55:53.706Z] 4056.25 IOPS, 15.84 MiB/s [2024-11-19T01:55:54.644Z] 4090.33 IOPS, 15.98 MiB/s [2024-11-19T01:55:54.644Z] 4119.50 IOPS, 16.09 MiB/s 00:14:44.029 Latency(us) 00:14:44.029 [2024-11-19T01:55:54.644Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.029 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:44.029 Verification LBA range: start 0x0 length 0x2000 00:14:44.029 TLSTESTn1 : 10.01 4126.23 16.12 0.00 0.00 30969.59 4349.21 36223.53 00:14:44.029 [2024-11-19T01:55:54.644Z] =================================================================================================================== 00:14:44.029 [2024-11-19T01:55:54.644Z] Total : 4126.23 16.12 0.00 0.00 30969.59 4349.21 36223.53 00:14:44.029 { 00:14:44.029 "results": [ 00:14:44.029 { 00:14:44.029 "job": "TLSTESTn1", 00:14:44.029 "core_mask": "0x4", 00:14:44.029 "workload": "verify", 00:14:44.029 "status": "finished", 00:14:44.029 "verify_range": { 00:14:44.029 "start": 0, 00:14:44.029 "length": 8192 00:14:44.029 }, 00:14:44.029 "queue_depth": 128, 00:14:44.029 "io_size": 4096, 00:14:44.029 "runtime": 10.014473, 00:14:44.029 "iops": 4126.228110056316, 00:14:44.029 "mibps": 16.118078554907484, 00:14:44.029 "io_failed": 0, 00:14:44.029 "io_timeout": 0, 00:14:44.029 "avg_latency_us": 30969.594835416752, 00:14:44.029 "min_latency_us": 4349.2072727272725, 00:14:44.029 "max_latency_us": 36223.534545454546 00:14:44.029 } 00:14:44.029 ], 00:14:44.029 "core_count": 1 00:14:44.029 } 00:14:44.029 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:14:44.029 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:14:44.029 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:14:44.029 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:14:44.029 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:14:44.029 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:44.029 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:14:44.029 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:14:44.029 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:14:44.029 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:44.029 nvmf_trace.0 00:14:44.029 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:14:44.029 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 84559 00:14:44.029 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 84559 ']' 00:14:44.029 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill 
-0 84559 00:14:44.029 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:14:44.029 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:44.029 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84559 00:14:44.029 killing process with pid 84559 00:14:44.029 Received shutdown signal, test time was about 10.000000 seconds 00:14:44.029 00:14:44.029 Latency(us) 00:14:44.029 [2024-11-19T01:55:54.644Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.029 [2024-11-19T01:55:54.644Z] =================================================================================================================== 00:14:44.029 [2024-11-19T01:55:54.644Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:44.029 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:44.029 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:44.029 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84559' 00:14:44.029 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 84559 00:14:44.029 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 84559 00:14:44.324 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:14:44.324 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:44.324 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:14:44.324 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:44.324 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:14:44.324 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:44.324 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:44.324 rmmod nvme_tcp 00:14:44.324 rmmod nvme_fabrics 00:14:44.324 rmmod nvme_keyring 00:14:44.324 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:44.324 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:14:44.324 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:14:44.324 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 84530 ']' 00:14:44.324 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 84530 00:14:44.324 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 84530 ']' 00:14:44.324 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 84530 00:14:44.324 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:14:44.324 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:44.324 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84530 00:14:44.324 killing process with pid 84530 00:14:44.324 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:44.324 01:55:54 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:44.324 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84530' 00:14:44.324 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 84530 00:14:44.324 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 84530 00:14:44.621 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:44.621 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:44.621 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:44.621 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:14:44.621 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:14:44.621 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:44.621 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:14:44.621 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:44.621 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:44.621 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:44.621 01:55:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:44.621 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:44.621 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:44.621 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:44.621 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:44.621 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:44.621 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:44.621 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:44.621 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:44.621 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:44.621 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:44.621 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:44.621 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:44.621 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.621 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:44.621 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 
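(Annotation: teardown mirrors setup. The iptr helper removes only the firewall rules this test inserted — which is why each rule was added with '-m comment --comment SPDK_NVMF:...' — and then links, bridge, and namespace go away. A condensed sketch of the idiom:

    # Drop only rules tagged by the test, keyed on the SPDK_NVMF comment
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Tear down the virtual topology built earlier
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns delete nvmf_tgt_ns_spdk   # assumption: _remove_spdk_ns does this; its xtrace is suppressed above

Filtering the saved ruleset instead of flushing chains is the design choice worth noting: it leaves any unrelated host firewall rules untouched between test runs.)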
00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.6DS 00:14:44.880 ************************************ 00:14:44.880 END TEST nvmf_fips 00:14:44.880 ************************************ 00:14:44.880 00:14:44.880 real 0m13.673s 00:14:44.880 user 0m18.583s 00:14:44.880 sys 0m5.650s 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:44.880 ************************************ 00:14:44.880 START TEST nvmf_control_msg_list 00:14:44.880 ************************************ 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:44.880 * Looking for test storage... 00:14:44.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:14:44.880 01:55:55 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:44.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.880 --rc genhtml_branch_coverage=1 00:14:44.880 --rc genhtml_function_coverage=1 00:14:44.880 --rc genhtml_legend=1 00:14:44.880 --rc geninfo_all_blocks=1 00:14:44.880 --rc geninfo_unexecuted_blocks=1 00:14:44.880 00:14:44.880 ' 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:44.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.880 --rc genhtml_branch_coverage=1 00:14:44.880 --rc genhtml_function_coverage=1 00:14:44.880 --rc genhtml_legend=1 00:14:44.880 --rc geninfo_all_blocks=1 00:14:44.880 --rc geninfo_unexecuted_blocks=1 00:14:44.880 00:14:44.880 ' 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:44.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.880 --rc genhtml_branch_coverage=1 00:14:44.880 --rc genhtml_function_coverage=1 00:14:44.880 --rc genhtml_legend=1 00:14:44.880 --rc geninfo_all_blocks=1 00:14:44.880 --rc geninfo_unexecuted_blocks=1 00:14:44.880 00:14:44.880 ' 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:44.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:44.880 --rc genhtml_branch_coverage=1 00:14:44.880 --rc genhtml_function_coverage=1 00:14:44.880 --rc genhtml_legend=1 00:14:44.880 --rc 
geninfo_all_blocks=1 00:14:44.880 --rc geninfo_unexecuted_blocks=1 00:14:44.880 00:14:44.880 ' 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:44.880 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:44.881 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:44.881 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:44.881 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:45.141 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:45.141 Cannot find device "nvmf_init_br" 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:45.141 Cannot find device "nvmf_init_br2" 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:45.141 Cannot find device "nvmf_tgt_br" 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:45.141 Cannot find device "nvmf_tgt_br2" 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:45.141 Cannot find device "nvmf_init_br" 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:45.141 Cannot find device "nvmf_init_br2" 00:14:45.141 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:14:45.142 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:45.142 Cannot find device "nvmf_tgt_br" 00:14:45.142 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:14:45.142 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:45.142 Cannot find device "nvmf_tgt_br2" 00:14:45.142 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:14:45.142 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:45.142 Cannot find device "nvmf_br" 00:14:45.142 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:14:45.142 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:45.142 Cannot find 
device "nvmf_init_if" 00:14:45.142 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:14:45.142 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:45.142 Cannot find device "nvmf_init_if2" 00:14:45.142 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:14:45.142 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:45.142 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:45.142 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:14:45.142 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:45.142 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:45.142 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:14:45.142 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:45.142 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:45.142 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:45.142 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:45.142 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:45.142 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:45.142 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:45.142 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:45.142 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:45.142 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:45.142 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:45.142 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:45.142 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:45.142 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:45.142 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:45.401 01:55:55 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:45.401 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:45.401 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:14:45.401 00:14:45.401 --- 10.0.0.3 ping statistics --- 00:14:45.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.401 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:45.401 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:45.401 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:14:45.401 00:14:45.401 --- 10.0.0.4 ping statistics --- 00:14:45.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.401 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:45.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:45.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:14:45.401 00:14:45.401 --- 10.0.0.1 ping statistics --- 00:14:45.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.401 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:45.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:45.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:14:45.401 00:14:45.401 --- 10.0.0.2 ping statistics --- 00:14:45.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.401 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=84954 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 84954 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 84954 ']' 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
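
At this point nvmftestinit has finished building the veth/bridge test topology and nvmfappstart is launching the target inside the namespace, waiting on its RPC socket. A minimal standalone sketch of the same setup, using the interface, namespace, address, and binary paths shown in the trace (this condenses the harness functions to one initiator and one target interface; it is not the harness itself):

    #!/usr/bin/env bash
    # Sketch: veth pair from host into a namespaced SPDK target, bridged,
    # with TCP port 4420 opened, mirroring the nvmf_veth_init trace above.
    set -e
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    # Start the target in the namespace and wait for the RPC socket,
    # as nvmfappstart/waitforlisten do in the trace.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
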
00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:45.401 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:45.401 [2024-11-19 01:55:55.968432] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:14:45.401 [2024-11-19 01:55:55.968561] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.661 [2024-11-19 01:55:56.119681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.661 [2024-11-19 01:55:56.141346] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.661 [2024-11-19 01:55:56.141414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.661 [2024-11-19 01:55:56.141440] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.661 [2024-11-19 01:55:56.141450] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.661 [2024-11-19 01:55:56.141459] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:45.661 [2024-11-19 01:55:56.141860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.661 [2024-11-19 01:55:56.174150] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:45.661 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:45.661 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:14:45.661 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:45.661 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:45.661 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:45.661 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.661 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:45.661 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:45.661 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:14:45.661 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.661 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:45.661 [2024-11-19 01:55:56.275902] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:45.920 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.920 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:14:45.920 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.920 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:45.920 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.920 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:45.920 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.920 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:45.920 Malloc0 00:14:45.920 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.920 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:45.920 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.920 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:45.920 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.920 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:45.920 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.920 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:45.920 [2024-11-19 01:55:56.311782] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:45.920 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.920 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=84978 00:14:45.920 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:45.920 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=84979 00:14:45.920 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:45.920 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=84980 00:14:45.920 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:45.920 01:55:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 84978 00:14:45.920 [2024-11-19 01:55:56.500172] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:45.920 [2024-11-19 01:55:56.500388] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:45.920 [2024-11-19 01:55:56.500614] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:47.297 Initializing NVMe Controllers 00:14:47.297 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:47.297 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:14:47.297 Initialization complete. Launching workers. 00:14:47.297 ======================================================== 00:14:47.297 Latency(us) 00:14:47.297 Device Information : IOPS MiB/s Average min max 00:14:47.297 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3595.00 14.04 277.76 212.42 604.65 00:14:47.297 ======================================================== 00:14:47.297 Total : 3595.00 14.04 277.76 212.42 604.65 00:14:47.297 00:14:47.297 Initializing NVMe Controllers 00:14:47.297 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:47.297 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:14:47.297 Initialization complete. Launching workers. 00:14:47.297 ======================================================== 00:14:47.297 Latency(us) 00:14:47.297 Device Information : IOPS MiB/s Average min max 00:14:47.297 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3594.00 14.04 277.85 210.58 708.34 00:14:47.297 ======================================================== 00:14:47.297 Total : 3594.00 14.04 277.85 210.58 708.34 00:14:47.297 00:14:47.297 Initializing NVMe Controllers 00:14:47.297 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:47.297 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:14:47.297 Initialization complete. Launching workers. 
00:14:47.297 ======================================================== 00:14:47.297 Latency(us) 00:14:47.297 Device Information : IOPS MiB/s Average min max 00:14:47.297 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3589.94 14.02 278.26 225.38 462.32 00:14:47.297 ======================================================== 00:14:47.297 Total : 3589.94 14.02 278.26 225.38 462.32 00:14:47.297 00:14:47.297 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 84979 00:14:47.297 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 84980 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:47.298 rmmod nvme_tcp 00:14:47.298 rmmod nvme_fabrics 00:14:47.298 rmmod nvme_keyring 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 84954 ']' 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 84954 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 84954 ']' 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 84954 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84954 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:47.298 killing process with pid 84954 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84954' 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 84954 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 84954 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:47.298 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:47.557 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:47.557 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:47.557 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:47.557 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:47.557 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:47.557 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:47.557 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.557 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:47.557 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.557 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:14:47.557 00:14:47.557 real 0m2.765s 00:14:47.557 user 0m4.712s 00:14:47.557 
sys 0m1.279s 00:14:47.557 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:47.557 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:47.557 ************************************ 00:14:47.557 END TEST nvmf_control_msg_list 00:14:47.557 ************************************ 00:14:47.557 01:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:47.557 01:55:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:47.557 01:55:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:47.557 01:55:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:47.557 ************************************ 00:14:47.557 START TEST nvmf_wait_for_buf 00:14:47.557 ************************************ 00:14:47.557 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:47.817 * Looking for test storage... 00:14:47.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:47.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.817 --rc genhtml_branch_coverage=1 00:14:47.817 --rc genhtml_function_coverage=1 00:14:47.817 --rc genhtml_legend=1 00:14:47.817 --rc geninfo_all_blocks=1 00:14:47.817 --rc geninfo_unexecuted_blocks=1 00:14:47.817 00:14:47.817 ' 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:47.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.817 --rc genhtml_branch_coverage=1 00:14:47.817 --rc genhtml_function_coverage=1 00:14:47.817 --rc genhtml_legend=1 00:14:47.817 --rc geninfo_all_blocks=1 00:14:47.817 --rc geninfo_unexecuted_blocks=1 00:14:47.817 00:14:47.817 ' 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:47.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.817 --rc genhtml_branch_coverage=1 00:14:47.817 --rc genhtml_function_coverage=1 00:14:47.817 --rc genhtml_legend=1 00:14:47.817 --rc geninfo_all_blocks=1 00:14:47.817 --rc geninfo_unexecuted_blocks=1 00:14:47.817 00:14:47.817 ' 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:47.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:47.817 --rc genhtml_branch_coverage=1 00:14:47.817 --rc genhtml_function_coverage=1 00:14:47.817 --rc genhtml_legend=1 00:14:47.817 --rc geninfo_all_blocks=1 00:14:47.817 --rc geninfo_unexecuted_blocks=1 00:14:47.817 00:14:47.817 ' 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:47.817 01:55:58 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.817 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:47.818 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
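
For reference, the control_msg_list body that completed above reduces to a handful of RPCs plus three concurrent perf clients contending for a single control message buffer. A rough sketch using the flags from the trace, assuming the harness's rpc_cmd resolves to scripts/rpc.py against /var/tmp/spdk.sock (the wrapper's definition is not shown in this log):

    #!/usr/bin/env bash
    # Sketch of the traced control_msg_list steps: TCP transport with a
    # one-entry control-message pool, a 32 MiB malloc namespace, and three
    # competing 1-second randread clients on separate core masks.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    $rpc nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    $rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    $rpc bdev_malloc_create -b Malloc0 32 512
    $rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    for mask in 0x2 0x4 0x8; do
        $perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
              -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' &
    done
    wait

The three near-identical latency reports in the trace (about 3590-3595 IOPS per client) are the expected outcome: with --control-msg-num 1 the clients serialize on the control message buffer rather than failing.
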
00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:47.818 Cannot find device "nvmf_init_br" 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:47.818 Cannot find device "nvmf_init_br2" 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:47.818 Cannot find device "nvmf_tgt_br" 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:47.818 Cannot find device "nvmf_tgt_br2" 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:47.818 Cannot find device "nvmf_init_br" 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:47.818 Cannot find device "nvmf_init_br2" 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:47.818 Cannot find device "nvmf_tgt_br" 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:47.818 Cannot find device "nvmf_tgt_br2" 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:14:47.818 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:48.077 Cannot find device "nvmf_br" 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:48.077 Cannot find device "nvmf_init_if" 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:48.077 Cannot find device "nvmf_init_if2" 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:48.077 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:48.077 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:48.077 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:48.336 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:48.336 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:14:48.336 00:14:48.336 --- 10.0.0.3 ping statistics --- 00:14:48.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.336 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:48.336 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:48.336 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:14:48.336 00:14:48.336 --- 10.0.0.4 ping statistics --- 00:14:48.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.336 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:48.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:48.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:14:48.336 00:14:48.336 --- 10.0.0.1 ping statistics --- 00:14:48.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.336 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:48.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:48.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:14:48.336 00:14:48.336 --- 10.0.0.2 ping statistics --- 00:14:48.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.336 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=85212 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 85212 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 85212 ']' 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:48.336 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:48.336 [2024-11-19 01:55:58.820575] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
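The trace above is nvmf/common.sh rebuilding its virtual test network from scratch. The "Cannot find device" and "Cannot open network namespace" messages are the expected first-pass cleanup (each teardown command is paired with true, so a missing device never aborts the run); nvmf_veth_init then creates the namespace, veth pairs, bridge, and firewall rules, verifies all four addresses with ping, and finally launches nvmf_tgt inside the namespace with --wait-for-rpc. Condensed from the commands traced above, the topology comes down to the following sketch (interface names, addresses, and port are the harness's own):

  # One namespace for the target; the initiator side stays in the root namespace.
  ip netns add nvmf_tgt_ns_spdk

  # Four veth pairs: the *_if ends carry traffic, the *_br ends join a bridge.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Initiators get 10.0.0.1/.2, targets 10.0.0.3/.4, all in one /24.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # Bring every endpoint up (including lo in the namespace), then bridge the peers.
  ip link set nvmf_init_if up
  ip link set nvmf_init_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
      ip link set "$dev" master nvmf_br
  done

  # Open the NVMe/TCP port on the initiator interfaces; let the bridge forward.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

Port 4420 is the well-known NVMe/TCP port, which is why every listener and firewall rule in this log uses it; the four ping reports above are the harness confirming that both sides of the bridge can reach each other before any NVMe traffic is attempted.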
00:14:48.336 [2024-11-19 01:55:58.820671] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.595 [2024-11-19 01:55:58.972338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.595 [2024-11-19 01:55:58.994396] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:48.595 [2024-11-19 01:55:58.994462] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:48.595 [2024-11-19 01:55:58.994476] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:48.595 [2024-11-19 01:55:58.994486] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:48.595 [2024-11-19 01:55:58.994495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:48.595 [2024-11-19 01:55:58.994868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.595 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:48.595 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:14:48.595 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:48.595 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:48.595 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:48.595 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:48.595 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:48.595 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:48.595 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:14:48.596 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.596 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:48.596 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.596 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:14:48.596 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.596 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:48.596 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.596 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:14:48.596 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.596 01:55:59 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:48.596 [2024-11-19 01:55:59.118202] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:48.596 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.596 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:48.596 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.596 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:48.596 Malloc0 00:14:48.596 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.596 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:14:48.596 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.596 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:48.596 [2024-11-19 01:55:59.158293] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:48.596 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.596 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:14:48.596 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.596 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:48.596 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.596 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:48.596 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.596 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:48.596 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.596 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:48.596 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.596 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:48.596 [2024-11-19 01:55:59.182396] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:48.596 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.596 01:55:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:48.853 [2024-11-19 01:55:59.391633] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:14:50.228 Initializing NVMe Controllers
00:14:50.228 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
00:14:50.228 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:14:50.228 Initialization complete. Launching workers.
00:14:50.228 ========================================================
00:14:50.228                                                                 Latency(us)
00:14:50.228 Device Information                                            :       IOPS      MiB/s    Average        min        max
00:14:50.228 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0:     498.01      62.25    8032.31    6995.81   11991.98
00:14:50.228 ========================================================
00:14:50.228 Total                                                         :     498.01      62.25    8032.31    6995.81   11991.98
00:14:50.228
00:14:50.228 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats
00:14:50.228 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
00:14:50.228 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:50.228 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:14:50.228 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:50.228 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750
00:14:50.228 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]]
00:14:50.228 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:14:50.228 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini
00:14:50.228 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup
00:14:50.228 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync
00:14:50.228 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:14:50.228 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e
00:14:50.228 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20}
00:14:50.228 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:14:50.228 rmmod nvme_tcp
00:14:50.228 rmmod nvme_fabrics
00:14:50.228 rmmod nvme_keyring
00:14:50.228 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:14:50.228 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e
00:14:50.228 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0
00:14:50.228 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 85212 ']'
00:14:50.228 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 85212
00:14:50.228 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 85212 ']'
00:14:50.228 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 --
# kill -0 85212 00:14:50.228 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:14:50.487 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:50.487 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85212 00:14:50.487 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:50.487 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:50.487 killing process with pid 85212 00:14:50.487 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85212' 00:14:50.487 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 85212 00:14:50.487 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 85212 00:14:50.487 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:50.487 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:50.487 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:50.487 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:14:50.487 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:14:50.487 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:50.487 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:14:50.487 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:50.487 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:50.487 01:56:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:50.487 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:50.487 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:50.487 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:50.487 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:50.487 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:50.487 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:50.487 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:50.487 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:50.746 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:50.747 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:50.747 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:50.747 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:50.747 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:50.747 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.747 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:50.747 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.747 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:14:50.747 00:14:50.747 real 0m3.145s 00:14:50.747 user 0m2.431s 00:14:50.747 sys 0m0.790s 00:14:50.747 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:50.747 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:50.747 ************************************ 00:14:50.747 END TEST nvmf_wait_for_buf 00:14:50.747 ************************************ 00:14:50.747 01:56:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:14:50.747 01:56:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:14:50.747 01:56:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:50.747 01:56:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.747 01:56:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:50.747 ************************************ 00:14:50.747 START TEST nvmf_fuzz 00:14:50.747 ************************************ 00:14:50.747 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:14:51.006 * Looking for test storage... 
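Before the fuzz setup below repeats the same network dance, it is worth summarizing what the wait_for_buf case that just ended (END TEST above) actually did: it shrank the target's small iobuf pool over RPC before the framework initialized, drove one second of 128 KiB random reads over TCP, and passed only because the nvmf_TCP module was forced to retry small-buffer allocations (retry_count=4750 in the trace). Stripped of the harness's rpc_cmd wrapper, which forwards to SPDK's scripts/rpc.py over /var/tmp/spdk.sock, the sequence is roughly:

  SPDK=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK/scripts/rpc.py"

  # The target was started with --wait-for-rpc, so pool sizes can still be changed.
  $RPC accel_set_options --small-cache-size 0 --large-cache-size 0
  $RPC iobuf_set_options --small-pool-count 154 --small_bufsize=8192
  $RPC framework_start_init

  # TCP transport with deliberately small shared-buffer counts (-n 24 -b 24),
  # one malloc-backed namespace, and a listener on the namespaced address.
  $RPC bdev_malloc_create -b Malloc0 32 512
  $RPC nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
  $RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

  # One second of 4-deep 128 KiB random reads, enough to exhaust 154 small buffers.
  "$SPDK/build/bin/spdk_nvme_perf" -q 4 -o 131072 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'

  # Pass criterion: the small pool must have seen at least one allocation retry.
  retry_count=$($RPC iobuf_get_stats \
      | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
  [[ $retry_count -eq 0 ]] && exit 1

All commands and flag values are taken directly from the trace; only the explicit rpc.py invocation is an assumption about how rpc_cmd dispatches.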
00:14:51.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:51.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.006 --rc genhtml_branch_coverage=1 00:14:51.006 --rc genhtml_function_coverage=1 00:14:51.006 --rc genhtml_legend=1 00:14:51.006 --rc geninfo_all_blocks=1 00:14:51.006 --rc geninfo_unexecuted_blocks=1 00:14:51.006 00:14:51.006 ' 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:51.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.006 --rc genhtml_branch_coverage=1 00:14:51.006 --rc genhtml_function_coverage=1 00:14:51.006 --rc genhtml_legend=1 00:14:51.006 --rc geninfo_all_blocks=1 00:14:51.006 --rc geninfo_unexecuted_blocks=1 00:14:51.006 00:14:51.006 ' 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:51.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.006 --rc genhtml_branch_coverage=1 00:14:51.006 --rc genhtml_function_coverage=1 00:14:51.006 --rc genhtml_legend=1 00:14:51.006 --rc geninfo_all_blocks=1 00:14:51.006 --rc geninfo_unexecuted_blocks=1 00:14:51.006 00:14:51.006 ' 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:51.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:51.006 --rc genhtml_branch_coverage=1 00:14:51.006 --rc genhtml_function_coverage=1 00:14:51.006 --rc genhtml_legend=1 00:14:51.006 --rc geninfo_all_blocks=1 00:14:51.006 --rc geninfo_unexecuted_blocks=1 00:14:51.006 00:14:51.006 ' 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
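The cmp_versions trace just above is scripts/common.sh checking whether the installed lcov (1.15, extracted with awk '{print $NF}') predates 2.0, which decides which LCOV_OPTS flavor gets exported. The helper splits both version strings on '.', '-' and ':' and compares them field by field; a minimal standalone rendering of the same logic (the real helper additionally validates each field with its decimal() check) would be:

  # Return 0 (true) when dotted version $1 is strictly older than $2.
  version_lt() {
      local -a ver1 ver2
      local v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      # Walk the longer field list; absent fields compare as 0.
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
          ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
      done
      return 1 # equal versions are not "less than"
  }

  version_lt 1.15 2 && echo "lcov 1.15 predates 2.0"   # true here, so the pre-2.0 branch is taken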
00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:51.006 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:51.007 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
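One detail worth flagging in the trace above: /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' while assembling the target's argument list, and test's -eq operator requires integer operands, hence the "[: : integer expression expected" diagnostic. It is harmless here because the test only gates an optional argument and simply evaluates false, but the failure mode is generic bash: comparing an unset or empty variable numerically. A generic sketch of the pitfall and the usual hardening (FLAG is a stand-in name, not the harness's actual variable):

  # Emits "[: : integer expression expected" and evaluates false when FLAG is empty:
  if [ "$FLAG" -eq 1 ]; then
      echo "flag set"
  fi

  # Defaulting the expansion keeps the comparison well-formed either way:
  if [ "${FLAG:-0}" -eq 1 ]; then
      echo "flag set"
  fi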
00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:51.007 Cannot find device "nvmf_init_br" 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:14:51.007 01:56:01 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:51.007 Cannot find device "nvmf_init_br2" 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:51.007 Cannot find device "nvmf_tgt_br" 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:51.007 Cannot find device "nvmf_tgt_br2" 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:51.007 Cannot find device "nvmf_init_br" 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:51.007 Cannot find device "nvmf_init_br2" 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:51.007 Cannot find device "nvmf_tgt_br" 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:51.007 Cannot find device "nvmf_tgt_br2" 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:14:51.007 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:51.266 Cannot find device "nvmf_br" 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:51.266 Cannot find device "nvmf_init_if" 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:51.266 Cannot find device "nvmf_init_if2" 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:51.266 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:51.266 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:51.266 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:51.525 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:51.525 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:51.525 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:51.525 01:56:01 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:51.525 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:51.525 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:51.525 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:51.525 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:51.525 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:14:51.525 00:14:51.525 --- 10.0.0.3 ping statistics --- 00:14:51.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.525 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:14:51.525 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:51.525 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:51.525 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:14:51.525 00:14:51.525 --- 10.0.0.4 ping statistics --- 00:14:51.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.525 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:14:51.525 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:51.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:51.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:14:51.525 00:14:51.525 --- 10.0.0.1 ping statistics --- 00:14:51.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.525 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:14:51.525 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:51.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:51.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:14:51.525 00:14:51.525 --- 10.0.0.2 ping statistics --- 00:14:51.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.525 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:14:51.525 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.525 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@461 -- # return 0 00:14:51.525 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:51.525 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.525 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:51.525 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:51.525 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.525 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:51.525 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:51.525 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=85475 00:14:51.525 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:51.525 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:51.525 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 85475 00:14:51.525 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 85475 ']' 00:14:51.525 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.525 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:51.525 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
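With the fuzz network verified, fabrics_fuzz.sh starts a fresh single-core target inside the namespace (pid 85475 above) and blocks in waitforlisten until the RPC socket answers before issuing any configuration. A bare-bones rendering of that launch-and-wait pattern, with waitforlisten reduced to a plain socket poll for illustration (the real helper is more careful about process liveness and timeouts):

  SPDK=/home/vagrant/spdk_repo/spdk

  # Single reactor (-m 0x1), all tracepoint groups (-e 0xFFFF), run in the target netns.
  ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!

  # Minimal stand-in for waitforlisten: poll for the RPC UNIX domain socket.
  for _ in $(seq 1 100); do
      [ -S /var/tmp/spdk.sock ] && break
      sleep 0.1
  done

  # From here the target is configured over RPC, as the rpc_cmd calls below show.
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192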
00:14:51.525 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:51.525 01:56:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:51.784 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:51.784 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:14:51.784 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:51.784 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.784 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:51.784 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.784 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:14:51.784 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.784 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:51.784 Malloc0 00:14:51.784 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.784 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:51.784 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.784 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:51.784 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.784 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:51.784 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.784 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:51.784 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.784 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:51.784 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.784 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:51.784 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.784 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:14:51.784 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:14:52.043 Shutting down the fuzz application 00:14:52.043 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:14:52.302 Shutting down the fuzz application 00:14:52.302 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:52.302 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.302 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:52.302 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.302 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:14:52.302 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:14:52.302 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:52.302 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:14:52.302 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:52.302 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:14:52.302 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:52.302 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:52.302 rmmod nvme_tcp 00:14:52.302 rmmod nvme_fabrics 00:14:52.302 rmmod nvme_keyring 00:14:52.302 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:52.560 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:14:52.560 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:14:52.560 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 85475 ']' 00:14:52.560 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 85475 00:14:52.560 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 85475 ']' 00:14:52.560 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 85475 00:14:52.560 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:14:52.560 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:52.560 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85475 00:14:52.560 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:52.560 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:52.560 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85475' 00:14:52.560 killing process with pid 85475 00:14:52.560 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 85475 00:14:52.560 01:56:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 85475 00:14:52.560 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:52.560 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:52.560 01:56:03 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:52.560 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:14:52.560 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:14:52.560 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:52.560 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:14:52.560 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:52.560 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:52.560 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:52.560 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:52.560 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:52.560 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:52.560 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:52.560 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:52.819 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:52.819 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:52.819 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:52.819 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:52.819 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:52.819 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:52.819 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:52.819 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:52.819 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.819 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:52.819 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.819 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0 00:14:52.819 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:14:52.819 00:14:52.819 real 0m2.062s 00:14:52.819 user 0m1.671s 00:14:52.819 sys 0m0.661s 00:14:52.819 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:52.819 ************************************ 00:14:52.819 END TEST nvmf_fuzz 00:14:52.819 ************************************ 00:14:52.819 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:52.819 01:56:03 
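
That closes TEST nvmf_fuzz (about 2.06 s wall clock): the target was provisioned over RPC with one malloc-backed subsystem, nvme_fuzz ran twice against it, and teardown deleted the subsystem, unloaded nvme_tcp/nvme_fabrics/nvme_keyring, killed pid 85475, swept the SPDK_NVMF-tagged iptables rules, tore down the veth topology, and removed the fuzz logs. Condensed from the trace above (rpc_cmd is the harness shim over scripts/rpc.py; binary paths shortened):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create -b Malloc0 64 512
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420'
    # Pass 1: randomized requests for 30 s, seeded for reproducibility.
    nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
    # Pass 2: deterministic replay of the canned example.json requests.
    nvme_fuzz -m 0x2 -F "$trid" -j test/app/fuzz/nvme_fuzz/example.json -a
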
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:14:52.819 01:56:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:52.819 01:56:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:52.819 01:56:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:52.819 ************************************ 00:14:52.819 START TEST nvmf_multiconnection 00:14:52.819 ************************************ 00:14:52.819 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:14:53.079 * Looking for test storage... 00:14:53.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:53.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.079 --rc genhtml_branch_coverage=1 00:14:53.079 --rc genhtml_function_coverage=1 00:14:53.079 --rc genhtml_legend=1 00:14:53.079 --rc geninfo_all_blocks=1 00:14:53.079 --rc geninfo_unexecuted_blocks=1 00:14:53.079 00:14:53.079 ' 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:53.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.079 --rc genhtml_branch_coverage=1 00:14:53.079 --rc genhtml_function_coverage=1 00:14:53.079 --rc genhtml_legend=1 00:14:53.079 --rc geninfo_all_blocks=1 00:14:53.079 --rc geninfo_unexecuted_blocks=1 00:14:53.079 00:14:53.079 ' 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:53.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.079 --rc genhtml_branch_coverage=1 00:14:53.079 --rc genhtml_function_coverage=1 00:14:53.079 --rc genhtml_legend=1 00:14:53.079 --rc geninfo_all_blocks=1 00:14:53.079 --rc geninfo_unexecuted_blocks=1 00:14:53.079 00:14:53.079 ' 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:53.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.079 --rc genhtml_branch_coverage=1 00:14:53.079 --rc genhtml_function_coverage=1 00:14:53.079 --rc genhtml_legend=1 00:14:53.079 --rc geninfo_all_blocks=1 00:14:53.079 --rc geninfo_unexecuted_blocks=1 00:14:53.079 00:14:53.079 ' 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.079 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.080 
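
A few entries back, scripts/common.sh ran lt 1.15 2 (via cmp_versions) to decide which lcov flags the coverage tooling should export: both version strings are split on '.', '-' and ':' and compared field by field as integers. A minimal standalone equivalent, assuming purely numeric fields (the real cmp_versions also handles the other comparison operators):

    # version_lt A B: succeed when version A sorts strictly before B.
    version_lt() {
            local IFS=.-: a b i
            read -ra a <<< "$1"; read -ra b <<< "$2"
            for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
                    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
                    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
            done
            return 1  # equal is not less-than
    }
    version_lt 1.15 2 && echo 'lcov predates 2.x'
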
01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:53.080 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:53.080 01:56:03 
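
multiconnection.sh has now sourced nvmf/common.sh, and nvmftestinit is naming every piece of the virtual fabric before building it: two initiator veths kept in the default namespace (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2), two target veths to be moved into nvmf_tgt_ns_spdk (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4), and a bridge nvmf_br joining the four peer ends. Condensed from the nvmf_veth_init trace that follows (the second interface pair and the 'ip link set ... up' calls are elided here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

The "Cannot find device" and "Cannot open network namespace" lines just below are the harmless teardown-first sweep of any topology a previous test might have left behind.
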
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:53.080 Cannot find device "nvmf_init_br" 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:53.080 Cannot find device "nvmf_init_br2" 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:53.080 Cannot find device "nvmf_tgt_br" 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true 00:14:53.080 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:53.339 Cannot find device "nvmf_tgt_br2" 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:53.339 Cannot find device "nvmf_init_br" 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:53.339 Cannot find device "nvmf_init_br2" 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:53.339 Cannot find device "nvmf_tgt_br" 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:53.339 Cannot find device "nvmf_tgt_br2" 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:53.339 Cannot find device "nvmf_br" 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:53.339 Cannot find device "nvmf_init_if" 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete 
nvmf_init_if2 00:14:53.339 Cannot find device "nvmf_init_if2" 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:53.339 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:53.339 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:53.339 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set 
nvmf_tgt_if2 up 00:14:53.598 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:53.598 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:53.598 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:53.598 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:53.598 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:53.598 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:53.598 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:53.598 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:53.598 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:53.598 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:53.598 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:53.598 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:53.598 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:53.598 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:53.598 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:53.598 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:14:53.598 00:14:53.598 --- 10.0.0.3 ping statistics --- 00:14:53.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.598 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:14:53.598 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:53.598 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:53.598 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:14:53.598 00:14:53.598 --- 10.0.0.4 ping statistics --- 00:14:53.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.598 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:14:53.598 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:53.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:53.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:14:53.598 00:14:53.598 --- 10.0.0.1 ping statistics --- 00:14:53.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.598 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:14:53.598 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:53.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:53.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:14:53.598 00:14:53.598 --- 10.0.0.2 ping statistics --- 00:14:53.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.598 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:53.598 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:53.598 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@461 -- # return 0 00:14:53.598 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:53.598 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:53.598 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:53.598 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:53.598 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:53.599 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:53.599 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:53.599 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:14:53.599 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:53.599 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:53.599 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:53.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.599 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=85706 00:14:53.599 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:53.599 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 85706 00:14:53.599 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 85706 ']' 00:14:53.599 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.599 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:53.599 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
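
Both directions of the fabric now answer pings (10.0.0.3 and 10.0.0.4 from the default namespace, 10.0.0.1 and 10.0.0.2 from inside the target namespace), and the multiconnection target is starting with four cores (-m 0xF) rather than the fuzz test's one. Worth noting from the firewall lines above: every rule goes in through the harness's ipts wrapper so that teardown can delete exactly the rules the tests added. Roughly, as a reconstruction of nvmf/common.sh's ipts/iptr helpers rather than a verbatim copy:

    # Tag every rule the tests add with an SPDK_NVMF comment...
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    # ...so teardown can sweep only tagged rules, leaving host rules intact.
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # as traced
    iptr  # what the fuzz-test teardown above invoked
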
00:14:53.599 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:53.599 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:53.599 [2024-11-19 01:56:04.155099] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:14:53.599 [2024-11-19 01:56:04.155864] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.857 [2024-11-19 01:56:04.305150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:53.857 [2024-11-19 01:56:04.330477] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.857 [2024-11-19 01:56:04.330762] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.857 [2024-11-19 01:56:04.330787] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:53.857 [2024-11-19 01:56:04.330800] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:53.857 [2024-11-19 01:56:04.330808] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:53.857 [2024-11-19 01:56:04.331667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.857 [2024-11-19 01:56:04.331740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.857 [2024-11-19 01:56:04.332699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:53.857 [2024-11-19 01:56:04.332709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.857 [2024-11-19 01:56:04.365093] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:53.857 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:53.857 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:14:53.857 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:53.857 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:53.857 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:53.857 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.857 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:53.857 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.857 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:53.857 [2024-11-19 01:56:04.451727] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:53.857 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.857 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:14:53.857 01:56:04 
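
The target is up on four reactors with the uring socket implementation overriding the default, the TCP transport is created, and seq 1 11 opens the provisioning loop (NVMF_SUBSYS=11). Every iteration traced below follows the same pattern of three RPCs plus a listener; condensed:

    # One pass of the multiconnection provisioning loop, for i in 1..11: a
    # 64 MB malloc bdev with 512-byte blocks, its own subsystem, and the
    # shared 10.0.0.3:4420 listener.
    for i in $(seq 1 11); do
            rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
            rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
            rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
            rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
                    -t tcp -a 10.0.0.3 -s 4420
    done
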
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:53.857 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:53.857 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.857 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.115 Malloc1 00:14:54.115 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.115 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.116 [2024-11-19 01:56:04.518912] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.116 Malloc2 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.116 Malloc3 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.116 Malloc4 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.116 Malloc5 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:14:54.116 
01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.116 Malloc6 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.116 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.375 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.375 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:14:54.375 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.375 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.375 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.375 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:14:54.375 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.375 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.375 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.375 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:54.375 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:14:54.375 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.375 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.375 Malloc7 00:14:54.375 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.375 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:14:54.375 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.375 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.375 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.375 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:14:54.375 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.375 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.375 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.375 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:14:54.375 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.375 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.375 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.375 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:54.375 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:14:54.375 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.375 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.375 Malloc8 00:14:54.375 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.376 
01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.376 Malloc9 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.376 Malloc10 00:14:54.376 01:56:04 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.376 Malloc11 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.376 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:14:54.635 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.635 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:14:54.635 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:54.635 01:56:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid=7cdc77f7-6c10-48d3-83fa-703a290bdf89 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:14:54.635 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:14:54.635 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:14:54.635 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:54.635 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:54.635 01:56:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:14:56.536 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:56.536 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:56.536 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:14:56.794 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:56.794 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:56.794 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:14:56.794 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:56.794 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid=7cdc77f7-6c10-48d3-83fa-703a290bdf89 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:14:56.794 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:14:56.794 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:14:56.794 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:56.794 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:56.794 01:56:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:14:58.703 01:56:09 
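The setup trace above repeats one pattern per index (multiconnection.sh lines 21-25): create a 64 MiB malloc bdev with 512-byte blocks, create a subsystem whose serial number is SPDKi, attach the bdev as a namespace, and add a TCP listener on 10.0.0.3:4420. A minimal sketch of that loop, assuming rpc_cmd forwards to scripts/rpc.py against the running target and NVMF_SUBSYS=11 as in this run:

  NVMF_SUBSYS=11
  for i in $(seq 1 $NVMF_SUBSYS); do
      # 64 MiB backing bdev, 512 B logical blocks, named MallocN
      rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
      # -a: allow any host to connect; -s: serial number the initiator will see
      rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
      rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
          -t tcp -a 10.0.0.3 -s 4420
  done
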
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:58.703 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:58.703 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:14:58.703 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:58.703 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:58.703 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:14:58.703 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:58.703 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid=7cdc77f7-6c10-48d3-83fa-703a290bdf89 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:14:58.960 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:14:58.960 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:14:58.960 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:58.960 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:58.960 01:56:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:15:00.862 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:00.862 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:15:00.862 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:00.862 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:00.862 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:00.862 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:15:00.862 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:00.862 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid=7cdc77f7-6c10-48d3-83fa-703a290bdf89 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:15:01.120 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:15:01.120 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:15:01.120 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:01.120 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:15:01.120 01:56:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:15:03.048 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:03.048 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:03.048 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:15:03.048 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:03.048 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:03.048 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:15:03.048 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:03.048 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid=7cdc77f7-6c10-48d3-83fa-703a290bdf89 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:15:03.306 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:15:03.306 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:15:03.306 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:03.306 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:03.306 01:56:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:15:05.211 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:05.211 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:05.211 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:15:05.211 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:05.211 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:05.211 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:15:05.211 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:05.211 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid=7cdc77f7-6c10-48d3-83fa-703a290bdf89 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:15:05.470 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:15:05.470 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:15:05.470 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local 
nvme_device_counter=1 nvme_devices=0 00:15:05.470 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:05.470 01:56:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:15:07.371 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:07.371 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:07.371 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:15:07.371 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:07.372 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:07.372 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:15:07.372 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:07.372 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid=7cdc77f7-6c10-48d3-83fa-703a290bdf89 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:15:07.630 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:15:07.630 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:15:07.630 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:07.630 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:07.630 01:56:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:15:09.531 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:09.531 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:09.531 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:15:09.531 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:09.531 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:09.531 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:15:09.531 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:09.531 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid=7cdc77f7-6c10-48d3-83fa-703a290bdf89 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 00:15:09.790 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:15:09.790 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1202 -- # local i=0 00:15:09.790 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:09.790 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:09.790 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:15:11.694 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:11.694 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:11.694 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:15:11.694 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:11.694 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:11.694 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:15:11.694 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:11.694 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid=7cdc77f7-6c10-48d3-83fa-703a290bdf89 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:15:11.954 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:15:11.954 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:15:11.954 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:11.954 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:11.954 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:15:13.859 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:13.859 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:13.859 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:15:13.859 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:13.859 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:13.859 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:15:13.859 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:13.859 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid=7cdc77f7-6c10-48d3-83fa-703a290bdf89 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:15:14.118 01:56:24 
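Each nvme connect above is followed by the same readiness poll (multiconnection.sh lines 28-30 plus the waitforserial helper traced out of autotest_common.sh): sleep two seconds, list block devices with their serials, and count matches, giving up after 16 attempts. A sketch of that pattern reconstructed from the trace; HOST_UUID stands in for the uuid:7cdc77f7-... host NQN used in this run, and the helper's optional device-count argument is elided:

  # Wait until one block device with serial $1 appears, as the traced helper does.
  waitforserial() {
      local serial=$1 i=0
      local nvme_device_counter=1 nvme_devices=0
      while (( i++ <= 15 )); do          # up to 16 attempts, ~32 s total
          sleep 2
          nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
          (( nvme_devices == nvme_device_counter )) && return 0
      done
      return 1
  }

  for i in $(seq 1 $NVMF_SUBSYS); do
      nvme connect --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$HOST_UUID" \
          --hostid="$HOST_UUID" -t tcp \
          -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.3 -s 4420
      waitforserial "SPDK$i"
  done
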
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:15:14.118 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:15:14.118 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:14.118 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:14.118 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:15:16.022 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:16.022 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:16.022 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:15:16.022 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:16.022 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:16.022 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:15:16.022 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:16.022 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid=7cdc77f7-6c10-48d3-83fa-703a290bdf89 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:15:16.281 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:15:16.281 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:15:16.281 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:16.281 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:16.281 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:15:18.221 01:56:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:18.221 01:56:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:18.221 01:56:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:15:18.221 01:56:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:18.221 01:56:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:18.221 01:56:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:15:18.221 01:56:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:15:18.505 [global] 00:15:18.505 thread=1 00:15:18.505 invalidate=1 00:15:18.505 rw=read 00:15:18.505 time_based=1 
00:15:18.505 runtime=10 00:15:18.505 ioengine=libaio 00:15:18.505 direct=1 00:15:18.505 bs=262144 00:15:18.505 iodepth=64 00:15:18.505 norandommap=1 00:15:18.505 numjobs=1 00:15:18.505 00:15:18.505 [job0] 00:15:18.505 filename=/dev/nvme0n1 00:15:18.505 [job1] 00:15:18.505 filename=/dev/nvme10n1 00:15:18.505 [job2] 00:15:18.505 filename=/dev/nvme1n1 00:15:18.505 [job3] 00:15:18.505 filename=/dev/nvme2n1 00:15:18.505 [job4] 00:15:18.505 filename=/dev/nvme3n1 00:15:18.505 [job5] 00:15:18.505 filename=/dev/nvme4n1 00:15:18.505 [job6] 00:15:18.505 filename=/dev/nvme5n1 00:15:18.505 [job7] 00:15:18.505 filename=/dev/nvme6n1 00:15:18.505 [job8] 00:15:18.505 filename=/dev/nvme7n1 00:15:18.505 [job9] 00:15:18.505 filename=/dev/nvme8n1 00:15:18.505 [job10] 00:15:18.505 filename=/dev/nvme9n1 00:15:18.505 Could not set queue depth (nvme0n1) 00:15:18.505 Could not set queue depth (nvme10n1) 00:15:18.505 Could not set queue depth (nvme1n1) 00:15:18.505 Could not set queue depth (nvme2n1) 00:15:18.505 Could not set queue depth (nvme3n1) 00:15:18.505 Could not set queue depth (nvme4n1) 00:15:18.505 Could not set queue depth (nvme5n1) 00:15:18.505 Could not set queue depth (nvme6n1) 00:15:18.505 Could not set queue depth (nvme7n1) 00:15:18.505 Could not set queue depth (nvme8n1) 00:15:18.505 Could not set queue depth (nvme9n1) 00:15:18.505 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:18.505 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:18.505 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:18.505 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:18.505 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:18.505 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:18.505 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:18.505 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:18.505 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:18.505 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:18.505 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:18.505 fio-3.35 00:15:18.505 Starting 11 threads 00:15:30.714 00:15:30.714 job0: (groupid=0, jobs=1): err= 0: pid=86149: Tue Nov 19 01:56:39 2024 00:15:30.714 read: IOPS=257, BW=64.4MiB/s (67.5MB/s)(650MiB/10093msec) 00:15:30.714 slat (usec): min=21, max=170424, avg=3844.67, stdev=10349.09 00:15:30.714 clat (msec): min=17, max=393, avg=244.30, stdev=67.79 00:15:30.714 lat (msec): min=17, max=475, avg=248.14, stdev=68.75 00:15:30.714 clat percentiles (msec): 00:15:30.714 | 1.00th=[ 110], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 190], 00:15:30.714 | 30.00th=[ 194], 40.00th=[ 201], 50.00th=[ 209], 60.00th=[ 262], 00:15:30.714 | 70.00th=[ 296], 80.00th=[ 321], 90.00th=[ 342], 95.00th=[ 359], 00:15:30.714 | 99.00th=[ 380], 99.50th=[ 388], 99.90th=[ 393], 99.95th=[ 393], 00:15:30.714 | 99.99th=[ 393] 00:15:30.714 bw ( KiB/s): min=42496, max=85162, 
per=12.84%, avg=64946.15, stdev=16831.95, samples=20 00:15:30.714 iops : min= 166, max= 332, avg=253.50, stdev=65.72, samples=20 00:15:30.714 lat (msec) : 20=0.23%, 50=0.54%, 100=0.15%, 250=57.21%, 500=41.86% 00:15:30.714 cpu : usr=0.10%, sys=1.28%, ctx=548, majf=0, minf=4097 00:15:30.714 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:15:30.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:30.714 issued rwts: total=2599,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.714 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:30.714 job1: (groupid=0, jobs=1): err= 0: pid=86150: Tue Nov 19 01:56:39 2024 00:15:30.714 read: IOPS=229, BW=57.4MiB/s (60.2MB/s)(580MiB/10100msec) 00:15:30.714 slat (usec): min=20, max=135127, avg=4310.01, stdev=10756.21 00:15:30.714 clat (msec): min=79, max=417, avg=274.02, stdev=47.06 00:15:30.714 lat (msec): min=80, max=417, avg=278.33, stdev=47.39 00:15:30.714 clat percentiles (msec): 00:15:30.714 | 1.00th=[ 104], 5.00th=[ 194], 10.00th=[ 226], 20.00th=[ 243], 00:15:30.714 | 30.00th=[ 253], 40.00th=[ 262], 50.00th=[ 275], 60.00th=[ 296], 00:15:30.714 | 70.00th=[ 305], 80.00th=[ 313], 90.00th=[ 326], 95.00th=[ 334], 00:15:30.714 | 99.00th=[ 355], 99.50th=[ 376], 99.90th=[ 397], 99.95th=[ 397], 00:15:30.714 | 99.99th=[ 418] 00:15:30.714 bw ( KiB/s): min=49664, max=71680, per=11.41%, avg=57728.00, stdev=6818.55, samples=20 00:15:30.714 iops : min= 194, max= 280, avg=225.50, stdev=26.63, samples=20 00:15:30.714 lat (msec) : 100=0.91%, 250=26.62%, 500=72.48% 00:15:30.714 cpu : usr=0.17%, sys=1.00%, ctx=477, majf=0, minf=4097 00:15:30.714 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:15:30.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:30.714 issued rwts: total=2318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.714 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:30.714 job2: (groupid=0, jobs=1): err= 0: pid=86151: Tue Nov 19 01:56:39 2024 00:15:30.714 read: IOPS=234, BW=58.5MiB/s (61.4MB/s)(592MiB/10109msec) 00:15:30.714 slat (usec): min=21, max=191235, avg=4104.46, stdev=12293.25 00:15:30.714 clat (usec): min=1616, max=459365, avg=268943.78, stdev=85145.33 00:15:30.714 lat (usec): min=1659, max=514829, avg=273048.24, stdev=86572.57 00:15:30.714 clat percentiles (msec): 00:15:30.714 | 1.00th=[ 16], 5.00th=[ 50], 10.00th=[ 133], 20.00th=[ 255], 00:15:30.714 | 30.00th=[ 259], 40.00th=[ 264], 50.00th=[ 271], 60.00th=[ 288], 00:15:30.714 | 70.00th=[ 317], 80.00th=[ 338], 90.00th=[ 359], 95.00th=[ 376], 00:15:30.714 | 99.00th=[ 393], 99.50th=[ 393], 99.90th=[ 405], 99.95th=[ 418], 00:15:30.714 | 99.99th=[ 460] 00:15:30.714 bw ( KiB/s): min=44032, max=123392, per=11.66%, avg=58956.80, stdev=17411.39, samples=20 00:15:30.714 iops : min= 172, max= 482, avg=230.30, stdev=68.01, samples=20 00:15:30.714 lat (msec) : 2=0.04%, 4=0.08%, 10=0.34%, 20=0.93%, 50=3.68% 00:15:30.714 lat (msec) : 100=4.35%, 250=6.25%, 500=84.33% 00:15:30.714 cpu : usr=0.15%, sys=1.13%, ctx=626, majf=0, minf=4097 00:15:30.714 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:15:30.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:30.714 issued 
rwts: total=2367,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.714 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:30.714 job3: (groupid=0, jobs=1): err= 0: pid=86152: Tue Nov 19 01:56:39 2024 00:15:30.715 read: IOPS=254, BW=63.6MiB/s (66.7MB/s)(641MiB/10078msec) 00:15:30.715 slat (usec): min=20, max=120680, avg=3894.60, stdev=10391.95 00:15:30.715 clat (msec): min=77, max=400, avg=247.49, stdev=69.25 00:15:30.715 lat (msec): min=87, max=414, avg=251.38, stdev=70.13 00:15:30.715 clat percentiles (msec): 00:15:30.715 | 1.00th=[ 109], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 190], 00:15:30.715 | 30.00th=[ 197], 40.00th=[ 203], 50.00th=[ 209], 60.00th=[ 262], 00:15:30.715 | 70.00th=[ 296], 80.00th=[ 326], 90.00th=[ 355], 95.00th=[ 368], 00:15:30.715 | 99.00th=[ 388], 99.50th=[ 388], 99.90th=[ 397], 99.95th=[ 401], 00:15:30.715 | 99.99th=[ 401] 00:15:30.715 bw ( KiB/s): min=45568, max=84480, per=12.66%, avg=64025.60, stdev=16377.66, samples=20 00:15:30.715 iops : min= 178, max= 330, avg=250.10, stdev=63.98, samples=20 00:15:30.715 lat (msec) : 100=0.66%, 250=58.07%, 500=41.26% 00:15:30.715 cpu : usr=0.14%, sys=1.17%, ctx=511, majf=0, minf=4097 00:15:30.715 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.5% 00:15:30.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:30.715 issued rwts: total=2564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.715 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:30.715 job4: (groupid=0, jobs=1): err= 0: pid=86153: Tue Nov 19 01:56:39 2024 00:15:30.715 read: IOPS=233, BW=58.4MiB/s (61.2MB/s)(590MiB/10106msec) 00:15:30.715 slat (usec): min=21, max=116026, avg=4236.57, stdev=10819.37 00:15:30.715 clat (msec): min=92, max=446, avg=269.53, stdev=50.18 00:15:30.715 lat (msec): min=95, max=446, avg=273.77, stdev=50.71 00:15:30.715 clat percentiles (msec): 00:15:30.715 | 1.00th=[ 121], 5.00th=[ 171], 10.00th=[ 207], 20.00th=[ 234], 00:15:30.715 | 30.00th=[ 249], 40.00th=[ 262], 50.00th=[ 275], 60.00th=[ 292], 00:15:30.715 | 70.00th=[ 300], 80.00th=[ 313], 90.00th=[ 326], 95.00th=[ 334], 00:15:30.715 | 99.00th=[ 351], 99.50th=[ 359], 99.90th=[ 397], 99.95th=[ 426], 00:15:30.715 | 99.99th=[ 447] 00:15:30.715 bw ( KiB/s): min=48640, max=79360, per=11.62%, avg=58791.90, stdev=8221.31, samples=20 00:15:30.715 iops : min= 190, max= 310, avg=229.55, stdev=32.16, samples=20 00:15:30.715 lat (msec) : 100=0.25%, 250=31.78%, 500=67.97% 00:15:30.715 cpu : usr=0.18%, sys=1.05%, ctx=460, majf=0, minf=4097 00:15:30.715 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:15:30.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:30.715 issued rwts: total=2360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.715 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:30.715 job5: (groupid=0, jobs=1): err= 0: pid=86154: Tue Nov 19 01:56:39 2024 00:15:30.715 read: IOPS=94, BW=23.7MiB/s (24.8MB/s)(240MiB/10141msec) 00:15:30.715 slat (usec): min=20, max=263430, avg=10417.76, stdev=28878.82 00:15:30.715 clat (msec): min=80, max=893, avg=664.41, stdev=160.75 00:15:30.715 lat (msec): min=81, max=912, avg=674.83, stdev=161.98 00:15:30.715 clat percentiles (msec): 00:15:30.715 | 1.00th=[ 87], 5.00th=[ 284], 10.00th=[ 527], 20.00th=[ 584], 00:15:30.715 | 30.00th=[ 617], 40.00th=[ 667], 
50.00th=[ 701], 60.00th=[ 735], 00:15:30.715 | 70.00th=[ 751], 80.00th=[ 785], 90.00th=[ 818], 95.00th=[ 835], 00:15:30.715 | 99.00th=[ 860], 99.50th=[ 885], 99.90th=[ 894], 99.95th=[ 894], 00:15:30.715 | 99.99th=[ 894] 00:15:30.715 bw ( KiB/s): min=12288, max=32256, per=4.54%, avg=22965.30, stdev=5839.97, samples=20 00:15:30.715 iops : min= 48, max= 126, avg=89.70, stdev=22.82, samples=20 00:15:30.715 lat (msec) : 100=1.35%, 250=2.71%, 500=4.48%, 750=60.10%, 1000=31.35% 00:15:30.715 cpu : usr=0.05%, sys=0.47%, ctx=184, majf=0, minf=4097 00:15:30.715 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.3%, >=64=93.4% 00:15:30.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.715 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:30.715 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.715 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:30.715 job6: (groupid=0, jobs=1): err= 0: pid=86155: Tue Nov 19 01:56:39 2024 00:15:30.715 read: IOPS=94, BW=23.5MiB/s (24.7MB/s)(239MiB/10147msec) 00:15:30.715 slat (usec): min=22, max=371110, avg=10481.90, stdev=31139.98 00:15:30.715 clat (msec): min=21, max=947, avg=668.64, stdev=158.76 00:15:30.715 lat (msec): min=21, max=952, avg=679.12, stdev=159.89 00:15:30.715 clat percentiles (msec): 00:15:30.715 | 1.00th=[ 43], 5.00th=[ 376], 10.00th=[ 493], 20.00th=[ 600], 00:15:30.715 | 30.00th=[ 651], 40.00th=[ 667], 50.00th=[ 684], 60.00th=[ 701], 00:15:30.715 | 70.00th=[ 743], 80.00th=[ 785], 90.00th=[ 827], 95.00th=[ 860], 00:15:30.715 | 99.00th=[ 927], 99.50th=[ 927], 99.90th=[ 953], 99.95th=[ 953], 00:15:30.715 | 99.99th=[ 953] 00:15:30.715 bw ( KiB/s): min= 9728, max=31232, per=4.51%, avg=22815.25, stdev=5359.45, samples=20 00:15:30.715 iops : min= 38, max= 122, avg=89.05, stdev=20.99, samples=20 00:15:30.715 lat (msec) : 50=1.36%, 100=1.36%, 250=1.26%, 500=6.07%, 750=61.36% 00:15:30.715 lat (msec) : 1000=28.59% 00:15:30.715 cpu : usr=0.05%, sys=0.50%, ctx=189, majf=0, minf=4097 00:15:30.715 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.4%, >=64=93.4% 00:15:30.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.715 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:30.715 issued rwts: total=955,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.715 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:30.715 job7: (groupid=0, jobs=1): err= 0: pid=86156: Tue Nov 19 01:56:39 2024 00:15:30.715 read: IOPS=168, BW=42.1MiB/s (44.1MB/s)(426MiB/10114msec) 00:15:30.715 slat (usec): min=25, max=265470, avg=5503.51, stdev=20510.92 00:15:30.715 clat (msec): min=5, max=1014, avg=374.09, stdev=238.23 00:15:30.715 lat (msec): min=6, max=1014, avg=379.59, stdev=241.77 00:15:30.715 clat percentiles (msec): 00:15:30.715 | 1.00th=[ 23], 5.00th=[ 69], 10.00th=[ 163], 20.00th=[ 253], 00:15:30.715 | 30.00th=[ 257], 40.00th=[ 262], 50.00th=[ 264], 60.00th=[ 268], 00:15:30.715 | 70.00th=[ 321], 80.00th=[ 642], 90.00th=[ 802], 95.00th=[ 860], 00:15:30.715 | 99.00th=[ 927], 99.50th=[ 927], 99.90th=[ 1011], 99.95th=[ 1011], 00:15:30.715 | 99.99th=[ 1011] 00:15:30.715 bw ( KiB/s): min=12800, max=69632, per=8.30%, avg=42005.90, stdev=21458.87, samples=20 00:15:30.715 iops : min= 50, max= 272, avg=163.95, stdev=83.76, samples=20 00:15:30.715 lat (msec) : 10=0.41%, 20=0.53%, 50=2.94%, 100=2.11%, 250=9.57% 00:15:30.715 lat (msec) : 500=57.49%, 750=13.98%, 1000=12.80%, 2000=0.18% 00:15:30.715 cpu : usr=0.13%, 
sys=0.86%, ctx=516, majf=0, minf=4098 00:15:30.715 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:15:30.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.715 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:30.715 issued rwts: total=1703,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.715 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:30.715 job8: (groupid=0, jobs=1): err= 0: pid=86157: Tue Nov 19 01:56:39 2024 00:15:30.715 read: IOPS=94, BW=23.5MiB/s (24.6MB/s)(238MiB/10137msec) 00:15:30.715 slat (usec): min=21, max=221905, avg=10492.28, stdev=30340.38 00:15:30.715 clat (msec): min=132, max=970, avg=669.58, stdev=144.64 00:15:30.715 lat (msec): min=199, max=970, avg=680.07, stdev=145.66 00:15:30.715 clat percentiles (msec): 00:15:30.715 | 1.00th=[ 213], 5.00th=[ 266], 10.00th=[ 550], 20.00th=[ 609], 00:15:30.715 | 30.00th=[ 642], 40.00th=[ 659], 50.00th=[ 684], 60.00th=[ 709], 00:15:30.715 | 70.00th=[ 735], 80.00th=[ 776], 90.00th=[ 827], 95.00th=[ 860], 00:15:30.715 | 99.00th=[ 902], 99.50th=[ 944], 99.90th=[ 969], 99.95th=[ 969], 00:15:30.715 | 99.99th=[ 969] 00:15:30.715 bw ( KiB/s): min=14336, max=30720, per=4.50%, avg=22786.25, stdev=5288.24, samples=20 00:15:30.715 iops : min= 56, max= 120, avg=89.00, stdev=20.66, samples=20 00:15:30.715 lat (msec) : 250=3.88%, 500=3.88%, 750=65.58%, 1000=26.65% 00:15:30.715 cpu : usr=0.05%, sys=0.45%, ctx=163, majf=0, minf=4097 00:15:30.715 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.4%, >=64=93.4% 00:15:30.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.715 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:30.715 issued rwts: total=953,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.715 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:30.715 job9: (groupid=0, jobs=1): err= 0: pid=86158: Tue Nov 19 01:56:39 2024 00:15:30.715 read: IOPS=90, BW=22.6MiB/s (23.7MB/s)(229MiB/10138msec) 00:15:30.715 slat (usec): min=20, max=283285, avg=10911.49, stdev=32917.51 00:15:30.715 clat (msec): min=21, max=1068, avg=695.54, stdev=163.89 00:15:30.715 lat (msec): min=22, max=1068, avg=706.45, stdev=164.35 00:15:30.715 clat percentiles (msec): 00:15:30.715 | 1.00th=[ 305], 5.00th=[ 351], 10.00th=[ 518], 20.00th=[ 575], 00:15:30.715 | 30.00th=[ 625], 40.00th=[ 659], 50.00th=[ 693], 60.00th=[ 743], 00:15:30.715 | 70.00th=[ 785], 80.00th=[ 835], 90.00th=[ 902], 95.00th=[ 953], 00:15:30.715 | 99.00th=[ 1020], 99.50th=[ 1070], 99.90th=[ 1070], 99.95th=[ 1070], 00:15:30.715 | 99.99th=[ 1070] 00:15:30.715 bw ( KiB/s): min=11264, max=30720, per=4.32%, avg=21853.55, stdev=6816.81, samples=20 00:15:30.715 iops : min= 44, max= 120, avg=85.25, stdev=26.56, samples=20 00:15:30.715 lat (msec) : 50=0.11%, 250=0.44%, 500=7.63%, 750=53.11%, 1000=36.53% 00:15:30.715 lat (msec) : 2000=2.18% 00:15:30.715 cpu : usr=0.06%, sys=0.41%, ctx=171, majf=0, minf=4097 00:15:30.715 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.5%, >=64=93.1% 00:15:30.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.715 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:30.715 issued rwts: total=917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.715 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:30.715 job10: (groupid=0, jobs=1): err= 0: pid=86159: Tue Nov 19 01:56:39 2024 00:15:30.715 read: IOPS=232, BW=58.2MiB/s 
(61.0MB/s)(588MiB/10106msec) 00:15:30.715 slat (usec): min=21, max=95487, avg=4247.89, stdev=10470.59 00:15:30.715 clat (msec): min=43, max=436, avg=270.28, stdev=55.62 00:15:30.715 lat (msec): min=43, max=436, avg=274.52, stdev=56.08 00:15:30.715 clat percentiles (msec): 00:15:30.715 | 1.00th=[ 61], 5.00th=[ 163], 10.00th=[ 201], 20.00th=[ 232], 00:15:30.715 | 30.00th=[ 251], 40.00th=[ 268], 50.00th=[ 279], 60.00th=[ 292], 00:15:30.715 | 70.00th=[ 305], 80.00th=[ 313], 90.00th=[ 330], 95.00th=[ 338], 00:15:30.716 | 99.00th=[ 355], 99.50th=[ 376], 99.90th=[ 439], 99.95th=[ 439], 00:15:30.716 | 99.99th=[ 439] 00:15:30.716 bw ( KiB/s): min=47711, max=78848, per=11.59%, avg=58613.45, stdev=8590.23, samples=20 00:15:30.716 iops : min= 186, max= 308, avg=228.85, stdev=33.58, samples=20 00:15:30.716 lat (msec) : 50=0.59%, 100=0.85%, 250=27.41%, 500=71.14% 00:15:30.716 cpu : usr=0.11%, sys=1.14%, ctx=474, majf=0, minf=4097 00:15:30.716 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:15:30.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:30.716 issued rwts: total=2353,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.716 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:30.716 00:15:30.716 Run status group 0 (all jobs): 00:15:30.716 READ: bw=494MiB/s (518MB/s), 22.6MiB/s-64.4MiB/s (23.7MB/s-67.5MB/s), io=5012MiB (5256MB), run=10078-10147msec 00:15:30.716 00:15:30.716 Disk stats (read/write): 00:15:30.716 nvme0n1: ios=5071/0, merge=0/0, ticks=1232698/0, in_queue=1232698, util=97.78% 00:15:30.716 nvme10n1: ios=4508/0, merge=0/0, ticks=1223661/0, in_queue=1223661, util=97.83% 00:15:30.716 nvme1n1: ios=4613/0, merge=0/0, ticks=1217091/0, in_queue=1217091, util=98.09% 00:15:30.716 nvme2n1: ios=4997/0, merge=0/0, ticks=1231432/0, in_queue=1231432, util=98.08% 00:15:30.716 nvme3n1: ios=4599/0, merge=0/0, ticks=1226288/0, in_queue=1226288, util=98.37% 00:15:30.716 nvme4n1: ios=1792/0, merge=0/0, ticks=1212232/0, in_queue=1212232, util=98.34% 00:15:30.716 nvme5n1: ios=1785/0, merge=0/0, ticks=1202698/0, in_queue=1202698, util=98.69% 00:15:30.716 nvme6n1: ios=3288/0, merge=0/0, ticks=1220213/0, in_queue=1220213, util=98.77% 00:15:30.716 nvme7n1: ios=1778/0, merge=0/0, ticks=1201898/0, in_queue=1201898, util=98.85% 00:15:30.716 nvme8n1: ios=1711/0, merge=0/0, ticks=1194019/0, in_queue=1194019, util=99.04% 00:15:30.716 nvme9n1: ios=4583/0, merge=0/0, ticks=1226449/0, in_queue=1226449, util=99.21% 00:15:30.716 01:56:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:15:30.716 [global] 00:15:30.716 thread=1 00:15:30.716 invalidate=1 00:15:30.716 rw=randwrite 00:15:30.716 time_based=1 00:15:30.716 runtime=10 00:15:30.716 ioengine=libaio 00:15:30.716 direct=1 00:15:30.716 bs=262144 00:15:30.716 iodepth=64 00:15:30.716 norandommap=1 00:15:30.716 numjobs=1 00:15:30.716 00:15:30.716 [job0] 00:15:30.716 filename=/dev/nvme0n1 00:15:30.716 [job1] 00:15:30.716 filename=/dev/nvme10n1 00:15:30.716 [job2] 00:15:30.716 filename=/dev/nvme1n1 00:15:30.716 [job3] 00:15:30.716 filename=/dev/nvme2n1 00:15:30.716 [job4] 00:15:30.716 filename=/dev/nvme3n1 00:15:30.716 [job5] 00:15:30.716 filename=/dev/nvme4n1 00:15:30.716 [job6] 00:15:30.716 filename=/dev/nvme5n1 00:15:30.716 [job7] 00:15:30.716 filename=/dev/nvme6n1 00:15:30.716 [job8] 
00:15:30.716 filename=/dev/nvme7n1 00:15:30.716 [job9] 00:15:30.716 filename=/dev/nvme8n1 00:15:30.716 [job10] 00:15:30.716 filename=/dev/nvme9n1 00:15:30.716 Could not set queue depth (nvme0n1) 00:15:30.716 Could not set queue depth (nvme10n1) 00:15:30.716 Could not set queue depth (nvme1n1) 00:15:30.716 Could not set queue depth (nvme2n1) 00:15:30.716 Could not set queue depth (nvme3n1) 00:15:30.716 Could not set queue depth (nvme4n1) 00:15:30.716 Could not set queue depth (nvme5n1) 00:15:30.716 Could not set queue depth (nvme6n1) 00:15:30.716 Could not set queue depth (nvme7n1) 00:15:30.716 Could not set queue depth (nvme8n1) 00:15:30.716 Could not set queue depth (nvme9n1) 00:15:30.716 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:30.716 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:30.716 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:30.716 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:30.716 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:30.716 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:30.716 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:30.716 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:30.716 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:30.716 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:30.716 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:15:30.716 fio-3.35 00:15:30.716 Starting 11 threads 00:15:40.696 00:15:40.696 job0: (groupid=0, jobs=1): err= 0: pid=86365: Tue Nov 19 01:56:50 2024 00:15:40.696 write: IOPS=241, BW=60.3MiB/s (63.2MB/s)(614MiB/10180msec); 0 zone resets 00:15:40.696 slat (usec): min=19, max=57940, avg=3980.96, stdev=7180.04 00:15:40.696 clat (msec): min=15, max=447, avg=261.29, stdev=39.20 00:15:40.696 lat (msec): min=15, max=447, avg=265.27, stdev=39.38 00:15:40.696 clat percentiles (msec): 00:15:40.696 | 1.00th=[ 72], 5.00th=[ 205], 10.00th=[ 224], 20.00th=[ 255], 00:15:40.696 | 30.00th=[ 262], 40.00th=[ 271], 50.00th=[ 271], 60.00th=[ 275], 00:15:40.696 | 70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 279], 95.00th=[ 284], 00:15:40.696 | 99.00th=[ 347], 99.50th=[ 397], 99.90th=[ 430], 99.95th=[ 447], 00:15:40.696 | 99.99th=[ 447] 00:15:40.696 bw ( KiB/s): min=57344, max=81408, per=8.20%, avg=61235.20, stdev=5597.35, samples=20 00:15:40.696 iops : min= 224, max= 318, avg=239.20, stdev=21.86, samples=20 00:15:40.696 lat (msec) : 20=0.08%, 50=0.49%, 100=1.18%, 250=11.28%, 500=86.97% 00:15:40.696 cpu : usr=0.49%, sys=0.76%, ctx=2645, majf=0, minf=1 00:15:40.696 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:15:40.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.696 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:40.696 issued rwts: total=0,2455,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:15:40.696 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:40.696 job1: (groupid=0, jobs=1): err= 0: pid=86366: Tue Nov 19 01:56:50 2024 00:15:40.696 write: IOPS=533, BW=133MiB/s (140MB/s)(1346MiB/10094msec); 0 zone resets 00:15:40.696 slat (usec): min=17, max=24820, avg=1851.33, stdev=3296.22 00:15:40.696 clat (msec): min=27, max=278, avg=118.08, stdev=28.17 00:15:40.696 lat (msec): min=27, max=278, avg=119.93, stdev=28.40 00:15:40.696 clat percentiles (msec): 00:15:40.696 | 1.00th=[ 100], 5.00th=[ 106], 10.00th=[ 107], 20.00th=[ 109], 00:15:40.697 | 30.00th=[ 112], 40.00th=[ 113], 50.00th=[ 114], 60.00th=[ 114], 00:15:40.697 | 70.00th=[ 115], 80.00th=[ 115], 90.00th=[ 117], 95.00th=[ 159], 00:15:40.697 | 99.00th=[ 266], 99.50th=[ 271], 99.90th=[ 275], 99.95th=[ 279], 00:15:40.697 | 99.99th=[ 279] 00:15:40.697 bw ( KiB/s): min=61440, max=147456, per=18.24%, avg=136243.20, stdev=24273.73, samples=20 00:15:40.697 iops : min= 240, max= 576, avg=532.20, stdev=94.82, samples=20 00:15:40.697 lat (msec) : 50=0.15%, 100=0.97%, 250=96.68%, 500=2.21% 00:15:40.697 cpu : usr=0.86%, sys=1.76%, ctx=6260, majf=0, minf=1 00:15:40.697 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:40.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.697 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:40.697 issued rwts: total=0,5385,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.697 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:40.697 job2: (groupid=0, jobs=1): err= 0: pid=86378: Tue Nov 19 01:56:50 2024 00:15:40.697 write: IOPS=180, BW=45.2MiB/s (47.4MB/s)(464MiB/10257msec); 0 zone resets 00:15:40.697 slat (usec): min=17, max=118028, avg=5390.33, stdev=10030.43 00:15:40.697 clat (msec): min=4, max=586, avg=348.06, stdev=48.83 00:15:40.697 lat (msec): min=5, max=586, avg=353.45, stdev=48.67 00:15:40.697 clat percentiles (msec): 00:15:40.697 | 1.00th=[ 159], 5.00th=[ 279], 10.00th=[ 300], 20.00th=[ 334], 00:15:40.697 | 30.00th=[ 342], 40.00th=[ 351], 50.00th=[ 355], 60.00th=[ 359], 00:15:40.697 | 70.00th=[ 363], 80.00th=[ 372], 90.00th=[ 388], 95.00th=[ 397], 00:15:40.697 | 99.00th=[ 477], 99.50th=[ 542], 99.90th=[ 584], 99.95th=[ 584], 00:15:40.697 | 99.99th=[ 584] 00:15:40.697 bw ( KiB/s): min=40960, max=53248, per=6.14%, avg=45880.30, stdev=3010.72, samples=20 00:15:40.697 iops : min= 160, max= 208, avg=179.20, stdev=11.72, samples=20 00:15:40.697 lat (msec) : 10=0.22%, 20=0.22%, 50=0.22%, 250=2.16%, 500=96.44% 00:15:40.697 lat (msec) : 750=0.75% 00:15:40.697 cpu : usr=0.30%, sys=0.66%, ctx=1962, majf=0, minf=1 00:15:40.697 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:15:40.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.697 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:40.697 issued rwts: total=0,1856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.697 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:40.697 job3: (groupid=0, jobs=1): err= 0: pid=86379: Tue Nov 19 01:56:50 2024 00:15:40.697 write: IOPS=242, BW=60.7MiB/s (63.6MB/s)(618MiB/10186msec); 0 zone resets 00:15:40.697 slat (usec): min=16, max=21523, avg=4038.83, stdev=7109.48 00:15:40.697 clat (msec): min=17, max=454, avg=259.42, stdev=41.88 00:15:40.697 lat (msec): min=17, max=454, avg=263.45, stdev=42.01 00:15:40.697 clat percentiles (msec): 00:15:40.697 | 1.00th=[ 78], 5.00th=[ 182], 
10.00th=[ 213], 20.00th=[ 255], 00:15:40.697 | 30.00th=[ 262], 40.00th=[ 271], 50.00th=[ 271], 60.00th=[ 275], 00:15:40.697 | 70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 279], 95.00th=[ 284], 00:15:40.697 | 99.00th=[ 351], 99.50th=[ 401], 99.90th=[ 439], 99.95th=[ 456], 00:15:40.697 | 99.99th=[ 456] 00:15:40.697 bw ( KiB/s): min=57344, max=92160, per=8.26%, avg=61683.90, stdev=7619.07, samples=20 00:15:40.697 iops : min= 224, max= 360, avg=240.90, stdev=29.77, samples=20 00:15:40.697 lat (msec) : 20=0.16%, 50=0.32%, 100=1.13%, 250=12.09%, 500=86.29% 00:15:40.697 cpu : usr=0.39%, sys=0.81%, ctx=2296, majf=0, minf=1 00:15:40.697 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:15:40.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.697 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:40.697 issued rwts: total=0,2473,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.697 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:40.697 job4: (groupid=0, jobs=1): err= 0: pid=86380: Tue Nov 19 01:56:50 2024 00:15:40.697 write: IOPS=240, BW=60.1MiB/s (63.0MB/s)(613MiB/10191msec); 0 zone resets 00:15:40.697 slat (usec): min=19, max=49923, avg=4077.48, stdev=7208.58 00:15:40.697 clat (msec): min=17, max=454, avg=261.90, stdev=38.02 00:15:40.697 lat (msec): min=17, max=454, avg=265.97, stdev=38.01 00:15:40.697 clat percentiles (msec): 00:15:40.697 | 1.00th=[ 79], 5.00th=[ 207], 10.00th=[ 228], 20.00th=[ 257], 00:15:40.697 | 30.00th=[ 262], 40.00th=[ 271], 50.00th=[ 271], 60.00th=[ 275], 00:15:40.697 | 70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 279], 95.00th=[ 284], 00:15:40.697 | 99.00th=[ 351], 99.50th=[ 401], 99.90th=[ 439], 99.95th=[ 456], 00:15:40.697 | 99.99th=[ 456] 00:15:40.697 bw ( KiB/s): min=57344, max=79360, per=8.18%, avg=61126.65, stdev=5163.69, samples=20 00:15:40.697 iops : min= 224, max= 310, avg=238.75, stdev=20.17, samples=20 00:15:40.697 lat (msec) : 20=0.12%, 50=0.33%, 100=1.14%, 250=11.46%, 500=86.94% 00:15:40.697 cpu : usr=0.53%, sys=0.64%, ctx=1614, majf=0, minf=1 00:15:40.697 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:15:40.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.697 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:40.697 issued rwts: total=0,2451,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.697 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:40.697 job5: (groupid=0, jobs=1): err= 0: pid=86381: Tue Nov 19 01:56:50 2024 00:15:40.697 write: IOPS=532, BW=133MiB/s (140MB/s)(1344MiB/10089msec); 0 zone resets 00:15:40.697 slat (usec): min=16, max=75843, avg=1824.32, stdev=3373.16 00:15:40.697 clat (msec): min=77, max=279, avg=118.29, stdev=28.44 00:15:40.697 lat (msec): min=77, max=279, avg=120.12, stdev=28.52 00:15:40.697 clat percentiles (msec): 00:15:40.697 | 1.00th=[ 101], 5.00th=[ 106], 10.00th=[ 107], 20.00th=[ 109], 00:15:40.697 | 30.00th=[ 112], 40.00th=[ 113], 50.00th=[ 114], 60.00th=[ 114], 00:15:40.697 | 70.00th=[ 115], 80.00th=[ 115], 90.00th=[ 117], 95.00th=[ 159], 00:15:40.697 | 99.00th=[ 266], 99.50th=[ 271], 99.90th=[ 275], 99.95th=[ 279], 00:15:40.697 | 99.99th=[ 279] 00:15:40.697 bw ( KiB/s): min=61050, max=147456, per=18.20%, avg=135953.45, stdev=25602.59, samples=20 00:15:40.697 iops : min= 238, max= 576, avg=531.00, stdev=100.07, samples=20 00:15:40.697 lat (msec) : 100=0.95%, 250=96.69%, 500=2.36% 00:15:40.697 cpu : usr=0.98%, sys=1.60%, ctx=6610, 
majf=0, minf=1 00:15:40.697 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:40.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.697 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:40.697 issued rwts: total=0,5374,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.697 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:40.697 job6: (groupid=0, jobs=1): err= 0: pid=86382: Tue Nov 19 01:56:50 2024 00:15:40.697 write: IOPS=239, BW=60.0MiB/s (62.9MB/s)(611MiB/10184msec); 0 zone resets 00:15:40.697 slat (usec): min=17, max=21463, avg=3966.84, stdev=7139.47 00:15:40.697 clat (msec): min=21, max=443, avg=262.59, stdev=36.42 00:15:40.697 lat (msec): min=21, max=443, avg=266.55, stdev=36.62 00:15:40.697 clat percentiles (msec): 00:15:40.697 | 1.00th=[ 88], 5.00th=[ 201], 10.00th=[ 247], 20.00th=[ 257], 00:15:40.697 | 30.00th=[ 262], 40.00th=[ 271], 50.00th=[ 271], 60.00th=[ 275], 00:15:40.697 | 70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 279], 95.00th=[ 284], 00:15:40.697 | 99.00th=[ 342], 99.50th=[ 393], 99.90th=[ 426], 99.95th=[ 443], 00:15:40.697 | 99.99th=[ 443] 00:15:40.697 bw ( KiB/s): min=57344, max=70144, per=8.16%, avg=60947.65, stdev=3335.60, samples=20 00:15:40.697 iops : min= 224, max= 274, avg=238.05, stdev=13.04, samples=20 00:15:40.697 lat (msec) : 50=0.49%, 100=0.70%, 250=9.53%, 500=89.28% 00:15:40.697 cpu : usr=0.47%, sys=0.75%, ctx=3250, majf=0, minf=1 00:15:40.697 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:15:40.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.697 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:40.697 issued rwts: total=0,2444,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.697 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:40.697 job7: (groupid=0, jobs=1): err= 0: pid=86383: Tue Nov 19 01:56:50 2024 00:15:40.697 write: IOPS=179, BW=44.8MiB/s (47.0MB/s)(460MiB/10262msec); 0 zone resets 00:15:40.697 slat (usec): min=17, max=161168, avg=5444.01, stdev=10380.46 00:15:40.697 clat (msec): min=5, max=595, avg=351.61, stdev=49.94 00:15:40.697 lat (msec): min=6, max=595, avg=357.06, stdev=49.68 00:15:40.698 clat percentiles (msec): 00:15:40.698 | 1.00th=[ 56], 5.00th=[ 296], 10.00th=[ 321], 20.00th=[ 338], 00:15:40.698 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 355], 60.00th=[ 359], 00:15:40.698 | 70.00th=[ 368], 80.00th=[ 372], 90.00th=[ 384], 95.00th=[ 401], 00:15:40.698 | 99.00th=[ 481], 99.50th=[ 550], 99.90th=[ 592], 99.95th=[ 592], 00:15:40.698 | 99.99th=[ 592] 00:15:40.698 bw ( KiB/s): min=41042, max=49250, per=6.09%, avg=45477.40, stdev=1964.65, samples=20 00:15:40.698 iops : min= 160, max= 192, avg=177.40, stdev= 7.71, samples=20 00:15:40.698 lat (msec) : 10=0.11%, 20=0.44%, 50=0.44%, 100=0.22%, 250=0.87% 00:15:40.698 lat (msec) : 500=96.95%, 750=0.98% 00:15:40.698 cpu : usr=0.37%, sys=0.54%, ctx=1945, majf=0, minf=1 00:15:40.698 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:15:40.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.698 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:40.698 issued rwts: total=0,1838,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.698 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:40.698 job8: (groupid=0, jobs=1): err= 0: pid=86384: Tue Nov 19 01:56:50 2024 00:15:40.698 write: IOPS=174, BW=43.6MiB/s 
(45.7MB/s)(446MiB/10246msec); 0 zone resets 00:15:40.698 slat (usec): min=15, max=262991, avg=5434.59, stdev=11726.55 00:15:40.698 clat (msec): min=15, max=664, avg=361.73, stdev=78.76 00:15:40.698 lat (msec): min=15, max=664, avg=367.16, stdev=79.40 00:15:40.698 clat percentiles (msec): 00:15:40.698 | 1.00th=[ 56], 5.00th=[ 271], 10.00th=[ 309], 20.00th=[ 338], 00:15:40.698 | 30.00th=[ 355], 40.00th=[ 359], 50.00th=[ 363], 60.00th=[ 372], 00:15:40.698 | 70.00th=[ 380], 80.00th=[ 384], 90.00th=[ 397], 95.00th=[ 447], 00:15:40.698 | 99.00th=[ 600], 99.50th=[ 609], 99.90th=[ 667], 99.95th=[ 667], 00:15:40.698 | 99.99th=[ 667] 00:15:40.698 bw ( KiB/s): min=24576, max=63488, per=5.90%, avg=44083.20, stdev=6629.83, samples=20 00:15:40.698 iops : min= 96, max= 248, avg=172.20, stdev=25.90, samples=20 00:15:40.698 lat (msec) : 20=0.11%, 50=0.73%, 100=1.62%, 250=1.79%, 500=91.20% 00:15:40.698 lat (msec) : 750=4.54% 00:15:40.698 cpu : usr=0.31%, sys=0.58%, ctx=1721, majf=0, minf=1 00:15:40.698 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:15:40.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.698 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:40.698 issued rwts: total=0,1785,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.698 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:40.698 job9: (groupid=0, jobs=1): err= 0: pid=86385: Tue Nov 19 01:56:50 2024 00:15:40.698 write: IOPS=186, BW=46.7MiB/s (48.9MB/s)(478MiB/10248msec); 0 zone resets 00:15:40.698 slat (usec): min=17, max=87872, avg=5232.35, stdev=9420.82 00:15:40.698 clat (msec): min=90, max=580, avg=337.44, stdev=43.60 00:15:40.698 lat (msec): min=90, max=580, avg=342.68, stdev=43.43 00:15:40.698 clat percentiles (msec): 00:15:40.698 | 1.00th=[ 131], 5.00th=[ 271], 10.00th=[ 300], 20.00th=[ 326], 00:15:40.698 | 30.00th=[ 334], 40.00th=[ 342], 50.00th=[ 347], 60.00th=[ 351], 00:15:40.698 | 70.00th=[ 355], 80.00th=[ 359], 90.00th=[ 363], 95.00th=[ 368], 00:15:40.698 | 99.00th=[ 468], 99.50th=[ 535], 99.90th=[ 584], 99.95th=[ 584], 00:15:40.698 | 99.99th=[ 584] 00:15:40.698 bw ( KiB/s): min=45056, max=53248, per=6.34%, avg=47334.40, stdev=2444.06, samples=20 00:15:40.698 iops : min= 176, max= 208, avg=184.90, stdev= 9.55, samples=20 00:15:40.698 lat (msec) : 100=0.21%, 250=3.24%, 500=95.82%, 750=0.73% 00:15:40.698 cpu : usr=0.34%, sys=0.58%, ctx=1301, majf=0, minf=1 00:15:40.698 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:15:40.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.698 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:40.698 issued rwts: total=0,1913,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.698 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:40.698 job10: (groupid=0, jobs=1): err= 0: pid=86386: Tue Nov 19 01:56:50 2024 00:15:40.698 write: IOPS=192, BW=48.1MiB/s (50.4MB/s)(493MiB/10251msec); 0 zone resets 00:15:40.698 slat (usec): min=16, max=55018, avg=4956.81, stdev=9041.19 00:15:40.698 clat (msec): min=22, max=595, avg=327.45, stdev=57.79 00:15:40.698 lat (msec): min=22, max=595, avg=332.41, stdev=58.27 00:15:40.698 clat percentiles (msec): 00:15:40.698 | 1.00th=[ 78], 5.00th=[ 222], 10.00th=[ 255], 20.00th=[ 321], 00:15:40.698 | 30.00th=[ 330], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 351], 00:15:40.698 | 70.00th=[ 351], 80.00th=[ 355], 90.00th=[ 359], 95.00th=[ 363], 00:15:40.698 | 99.00th=[ 481], 99.50th=[ 
550], 99.90th=[ 592], 99.95th=[ 592], 00:15:40.698 | 99.99th=[ 592] 00:15:40.698 bw ( KiB/s): min=44966, max=70144, per=6.54%, avg=48866.90, stdev=6127.17, samples=20 00:15:40.698 iops : min= 175, max= 274, avg=190.80, stdev=23.91, samples=20 00:15:40.698 lat (msec) : 50=0.41%, 100=0.61%, 250=7.35%, 500=90.72%, 750=0.91% 00:15:40.698 cpu : usr=0.43%, sys=0.58%, ctx=2564, majf=0, minf=1 00:15:40.698 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:15:40.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.698 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:15:40.698 issued rwts: total=0,1972,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.698 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:40.698 00:15:40.698 Run status group 0 (all jobs): 00:15:40.698 WRITE: bw=730MiB/s (765MB/s), 43.6MiB/s-133MiB/s (45.7MB/s-140MB/s), io=7487MiB (7850MB), run=10089-10262msec 00:15:40.698 00:15:40.698 Disk stats (read/write): 00:15:40.698 nvme0n1: ios=49/4780, merge=0/0, ticks=52/1206274, in_queue=1206326, util=97.71% 00:15:40.698 nvme10n1: ios=49/10635, merge=0/0, ticks=59/1215996, in_queue=1216055, util=97.99% 00:15:40.698 nvme1n1: ios=48/3699, merge=0/0, ticks=50/1237544, in_queue=1237594, util=98.20% 00:15:40.698 nvme2n1: ios=31/4820, merge=0/0, ticks=30/1206816, in_queue=1206846, util=98.00% 00:15:40.698 nvme3n1: ios=29/4776, merge=0/0, ticks=98/1207874, in_queue=1207972, util=98.34% 00:15:40.698 nvme4n1: ios=0/10602, merge=0/0, ticks=0/1215684, in_queue=1215684, util=98.18% 00:15:40.698 nvme5n1: ios=0/4757, merge=0/0, ticks=0/1207426, in_queue=1207426, util=98.33% 00:15:40.698 nvme6n1: ios=0/3666, merge=0/0, ticks=0/1238660, in_queue=1238660, util=98.53% 00:15:40.698 nvme7n1: ios=0/3555, merge=0/0, ticks=0/1236069, in_queue=1236069, util=98.61% 00:15:40.698 nvme8n1: ios=0/3805, merge=0/0, ticks=0/1236939, in_queue=1236939, util=98.71% 00:15:40.698 nvme9n1: ios=0/3934, merge=0/0, ticks=0/1238902, in_queue=1238902, util=98.91% 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:40.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:15:40.698 01:56:50 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:15:40.698 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:15:40.698 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:40.698 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:15:40.699 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:15:40.699 01:56:50 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:15:40.699 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.699 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:40.699 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.699 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:40.699 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:15:40.699 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:15:40.699 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:15:40.699 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:15:40.699 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:40.699 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:15:40.699 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:15:40.699 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:40.699 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:15:40.699 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:15:40.699 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.699 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:40.699 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.699 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:40.699 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:15:40.699 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:15:40.699 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:15:40.699 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:15:40.699 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:15:40.699 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:40.699 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:40.699 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:15:40.699 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:15:40.699 01:56:50 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:15:40.699 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.699 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:40.699 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.699 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:40.699 01:56:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:15:40.699 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:15:40.699 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:15:40.699 01:56:51 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:15:40.699 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:15:40.699 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:15:40.699 01:56:51 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.699 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:40.958 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.958 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:40.958 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:15:40.958 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:15:40.958 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:15:40.958 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:15:40.958 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:40.958 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:15:40.958 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:15:40.958 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:40.958 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:15:40.958 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:15:40.958 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.958 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:40.958 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.958 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:40.959 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:15:40.959 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:15:40.959 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:15:40.959 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:15:40.959 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:40.959 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:15:40.959 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:40.959 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:15:40.959 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:15:40.959 
01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:15:40.959 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.959 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:40.959 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.959 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:15:40.959 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:15:40.959 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:15:40.959 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:40.959 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:15:40.959 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:40.959 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:15:40.959 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:40.959 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:40.959 rmmod nvme_tcp 00:15:40.959 rmmod nvme_fabrics 00:15:40.959 rmmod nvme_keyring 00:15:40.959 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:40.959 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:15:40.959 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:15:40.959 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 85706 ']' 00:15:40.959 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 85706 00:15:40.959 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 85706 ']' 00:15:40.959 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 85706 00:15:40.959 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:15:40.959 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:40.959 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85706 00:15:41.218 killing process with pid 85706 00:15:41.218 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:41.218 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:41.218 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85706' 00:15:41.218 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 85706 00:15:41.218 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 85706 
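[Editor's note] The repetitive xtrace above (cnode1 through cnode11, then nvmftestfini) is easier to follow condensed. A minimal sketch of the teardown that multiconnection.sh performs after the eleven-job fio write workload, reconstructed from the trace; this is not the verbatim script. waitforserial_disconnect, rpc_cmd, and nvmftestfini are SPDK autotest helpers as shown in the trace, and NVMF_SUBSYS is 11 per the "seq 1 11" call:

  # Condensed sketch of the teardown traced above (not the verbatim script):
  sync                                          # flush dirty pages before disconnecting
  for i in $(seq 1 "$NVMF_SUBSYS"); do          # NVMF_SUBSYS=11 in this run
      # Detach the initiator-side controller for subsystem $i ...
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
      # ... wait until its serial (SPDK$i) disappears from lsblk output ...
      waitforserial_disconnect "SPDK${i}"
      # ... then remove the subsystem from the SPDK target over RPC.
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
  done
  rm -f ./local-job0-0-verify.state             # drop fio's verify state file
  nvmftestfini                                  # rmmod nvme-tcp/nvme-fabrics, kill the target (pid 85706 here)

Each iteration maps to the @38/@39/@40 lines in the trace: disconnect on the initiator, confirm the namespace is gone, delete on the target. The "rmmod nvme_tcp / nvme_fabrics / nvme_keyring" messages and "killing process with pid 85706" above are nvmftestfini doing the final cleanup.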
00:15:41.218 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:41.218 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:41.218 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:41.218 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:15:41.218 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:15:41.477 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:41.477 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:15:41.477 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:41.477 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:41.477 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:41.477 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:41.477 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:41.478 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:41.478 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:41.478 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:41.478 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:41.478 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:41.478 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:41.478 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:41.478 01:56:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:41.478 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:41.478 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:41.478 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:41.478 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.478 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:41.478 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.478 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- # return 0 00:15:41.478 00:15:41.478 real 0m48.675s 00:15:41.478 user 2m47.338s 00:15:41.478 sys 0m24.783s 00:15:41.478 01:56:52 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.478 ************************************ 00:15:41.478 END TEST nvmf_multiconnection 00:15:41.478 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:41.478 ************************************ 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:41.738 ************************************ 00:15:41.738 START TEST nvmf_initiator_timeout 00:15:41.738 ************************************ 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:15:41.738 * Looking for test storage... 00:15:41.738 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:41.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.738 --rc genhtml_branch_coverage=1 00:15:41.738 --rc genhtml_function_coverage=1 00:15:41.738 --rc genhtml_legend=1 00:15:41.738 --rc geninfo_all_blocks=1 00:15:41.738 --rc geninfo_unexecuted_blocks=1 00:15:41.738 00:15:41.738 ' 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:41.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.738 --rc genhtml_branch_coverage=1 00:15:41.738 --rc genhtml_function_coverage=1 00:15:41.738 --rc genhtml_legend=1 00:15:41.738 --rc geninfo_all_blocks=1 00:15:41.738 --rc geninfo_unexecuted_blocks=1 00:15:41.738 00:15:41.738 ' 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:41.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.738 --rc genhtml_branch_coverage=1 00:15:41.738 --rc genhtml_function_coverage=1 00:15:41.738 --rc genhtml_legend=1 00:15:41.738 --rc geninfo_all_blocks=1 00:15:41.738 --rc geninfo_unexecuted_blocks=1 00:15:41.738 00:15:41.738 ' 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:41.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.738 --rc genhtml_branch_coverage=1 00:15:41.738 --rc genhtml_function_coverage=1 00:15:41.738 --rc genhtml_legend=1 00:15:41.738 --rc geninfo_all_blocks=1 00:15:41.738 --rc geninfo_unexecuted_blocks=1 00:15:41.738 00:15:41.738 ' 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.738 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.739 01:56:52 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:41.739 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
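[Editor's note] nvmftestinit now rebuilds the virtual test network for the initiator_timeout test. The NVMF_* assignments in the surrounding trace name two initiator-side and two target-side veth pairs plus a bridge; the commands below first tear down any leftovers (hence the harmless "Cannot find device" messages) and then recreate the topology. A condensed sketch of what nvmf_veth_init executes, using the interface names and 10.0.0.0/24 addresses visible in the trace; ordering and error handling are simplified:

  # Condensed sketch of nvmf_veth_init as traced below (cleanup/error handling omitted):
  ip netns add nvmf_tgt_ns_spdk                               # target gets its own net namespace
  ip link add nvmf_init_if  type veth peer name nvmf_init_br  # two initiator-side veth pairs
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br   # two target-side veth pairs
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk             # move target ends into the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator addresses
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge ties the four peer ends together
  for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$peer" up
      ip link set "$peer" master nvmf_br
  done
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT

The four pings that close the trace then exercise each address (10.0.0.3 and 10.0.0.4 from the host side, 10.0.0.1 and 10.0.0.2 from inside the namespace) across the bridge before the target is started.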
00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:41.739 Cannot find device "nvmf_init_br" 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:15:41.739 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:41.999 Cannot find device "nvmf_init_br2" 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:41.999 Cannot find device "nvmf_tgt_br" 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:41.999 Cannot find device "nvmf_tgt_br2" 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:41.999 Cannot find device "nvmf_init_br" 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:41.999 Cannot find device "nvmf_init_br2" 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:41.999 Cannot find device "nvmf_tgt_br" 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:41.999 Cannot find device "nvmf_tgt_br2" 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:15:41.999 01:56:52 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:41.999 Cannot find device "nvmf_br" 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:41.999 Cannot find device "nvmf_init_if" 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:41.999 Cannot find device "nvmf_init_if2" 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:41.999 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:41.999 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set 
nvmf_init_br up 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:41.999 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:42.277 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:42.277 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:15:42.277 00:15:42.277 --- 10.0.0.3 ping statistics --- 00:15:42.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.277 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:42.277 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:15:42.277 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:15:42.277 00:15:42.277 --- 10.0.0.4 ping statistics --- 00:15:42.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.277 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:42.277 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:42.277 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:15:42.277 00:15:42.277 --- 10.0.0.1 ping statistics --- 00:15:42.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.277 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:42.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:42.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:15:42.277 00:15:42.277 --- 10.0.0.2 ping statistics --- 00:15:42.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.277 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@461 -- # return 0 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=86802 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 86802 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 86802 ']' 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.277 01:56:52 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:42.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.277 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:42.278 01:56:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:42.278 [2024-11-19 01:56:52.823536] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:15:42.278 [2024-11-19 01:56:52.823640] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.575 [2024-11-19 01:56:52.964628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:42.575 [2024-11-19 01:56:52.985802] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:42.575 [2024-11-19 01:56:52.985893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:42.575 [2024-11-19 01:56:52.985914] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:42.575 [2024-11-19 01:56:52.985927] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:42.575 [2024-11-19 01:56:52.985939] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
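The common.sh@177-225 sequence traced above builds the test network from scratch: veth pairs whose target-side ends are moved into the nvmf_tgt_ns_spdk namespace, host-side peers enslaved to a bridge, and ping checks in both directions before the target is started. A minimal standalone sketch of the same topology, assuming root and iproute2 (all names here are illustrative, not the script's):

    # standalone sketch of the veth/bridge topology built above (names illustrative)
    ip netns add demo_ns                                   # target namespace
    ip link add demo_init_if type veth peer name demo_init_br
    ip link add demo_tgt_if type veth peer name demo_tgt_br
    ip link set demo_tgt_if netns demo_ns                  # target end of the pair moves in
    ip addr add 10.0.0.1/24 dev demo_init_if               # initiator address
    ip netns exec demo_ns ip addr add 10.0.0.3/24 dev demo_tgt_if
    ip link set demo_init_if up
    ip link set demo_init_br up
    ip link set demo_tgt_br up
    ip netns exec demo_ns ip link set demo_tgt_if up
    ip netns exec demo_ns ip link set lo up
    ip link add demo_br type bridge
    ip link set demo_br up
    ip link set demo_init_br master demo_br                # bridge the host-side peers
    ip link set demo_tgt_br master demo_br
    ping -c 1 10.0.0.3                                     # initiator -> target across the bridge

The "Cannot find device" and "Cannot open network namespace" errors earlier in the trace are expected: the @162-174 block tears down any leftover topology from a previous run, and each delete is followed by "true" so a clean machine does not fail the script.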
00:15:42.575 [2024-11-19 01:56:52.986906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.575 [2024-11-19 01:56:52.987607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:42.575 [2024-11-19 01:56:52.987696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:42.575 [2024-11-19 01:56:52.987708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.575 [2024-11-19 01:56:53.018950] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:42.575 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:42.575 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:15:42.575 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:42.575 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:42.575 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:42.575 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:42.575 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:42.575 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:42.575 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.575 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:42.575 Malloc0 00:15:42.575 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.575 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:15:42.575 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.575 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:42.575 Delay0 00:15:42.575 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.575 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:42.575 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.575 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:42.575 [2024-11-19 01:56:53.148893] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:42.575 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.575 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:42.575 01:56:53 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.575 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:42.575 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.575 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:42.575 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.575 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:42.575 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.575 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:42.576 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.576 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:42.576 [2024-11-19 01:56:53.177466] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:42.576 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.576 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid=7cdc77f7-6c10-48d3-83fa-703a290bdf89 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:15:42.834 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:15:42.834 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:15:42.834 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:42.834 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:42.834 01:56:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:15:44.739 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:44.739 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:44.739 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:44.739 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:44.739 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:44.739 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:15:44.739 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=86859 00:15:44.739 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:15:44.739 01:56:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:15:44.739 [global] 00:15:44.739 thread=1 00:15:44.739 invalidate=1 00:15:44.739 rw=write 00:15:44.739 time_based=1 00:15:44.739 runtime=60 00:15:44.739 ioengine=libaio 00:15:44.739 direct=1 00:15:44.739 bs=4096 00:15:44.739 iodepth=1 00:15:44.739 norandommap=0 00:15:44.739 numjobs=1 00:15:44.739 00:15:44.739 verify_dump=1 00:15:44.739 verify_backlog=512 00:15:44.739 verify_state_save=0 00:15:44.739 do_verify=1 00:15:44.739 verify=crc32c-intel 00:15:44.998 [job0] 00:15:44.998 filename=/dev/nvme0n1 00:15:44.998 Could not set queue depth (nvme0n1) 00:15:44.998 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:44.998 fio-3.35 00:15:44.998 Starting 1 thread 00:15:48.286 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:15:48.286 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.286 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:48.286 true 00:15:48.286 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.286 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:15:48.286 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.286 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:48.286 true 00:15:48.286 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.286 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:15:48.286 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.286 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:48.286 true 00:15:48.286 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.286 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:15:48.286 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.286 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:48.286 true 00:15:48.286 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.286 01:56:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:15:50.819 01:57:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:15:50.819 01:57:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.819 01:57:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:50.819 true 00:15:50.819 01:57:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.819 01:57:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:15:50.819 01:57:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.819 01:57:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:50.819 true 00:15:50.819 01:57:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.819 01:57:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:15:50.819 01:57:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.819 01:57:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:50.819 true 00:15:50.819 01:57:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.819 01:57:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:15:50.819 01:57:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.819 01:57:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:15:50.819 true 00:15:50.819 01:57:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.819 01:57:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:15:50.819 01:57:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 86859 00:16:47.051 00:16:47.051 job0: (groupid=0, jobs=1): err= 0: pid=86880: Tue Nov 19 01:57:55 2024 00:16:47.051 read: IOPS=837, BW=3351KiB/s (3432kB/s)(196MiB/60000msec) 00:16:47.051 slat (usec): min=10, max=20881, avg=14.51, stdev=106.69 00:16:47.051 clat (usec): min=152, max=40471k, avg=1000.11, stdev=180506.43 00:16:47.051 lat (usec): min=164, max=40472k, avg=1014.62, stdev=180506.45 00:16:47.051 clat percentiles (usec): 00:16:47.051 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 176], 00:16:47.051 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:16:47.051 | 70.00th=[ 202], 80.00th=[ 210], 90.00th=[ 225], 95.00th=[ 239], 00:16:47.051 | 99.00th=[ 293], 99.50th=[ 326], 99.90th=[ 441], 99.95th=[ 570], 00:16:47.051 | 99.99th=[ 857] 00:16:47.051 write: IOPS=844, BW=3379KiB/s (3460kB/s)(198MiB/60000msec); 0 zone resets 00:16:47.051 slat (usec): min=14, max=616, avg=20.89, stdev= 6.50 00:16:47.051 clat (usec): min=115, max=1207, avg=152.99, stdev=25.98 00:16:47.051 lat (usec): min=133, max=1226, avg=173.88, stdev=27.45 00:16:47.051 clat percentiles (usec): 00:16:47.051 | 1.00th=[ 127], 5.00th=[ 131], 10.00th=[ 135], 20.00th=[ 139], 00:16:47.051 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 153], 00:16:47.051 | 70.00th=[ 157], 80.00th=[ 165], 90.00th=[ 178], 95.00th=[ 190], 00:16:47.051 | 99.00th=[ 
223], 99.50th=[ 258], 99.90th=[ 441], 99.95th=[ 529], 00:16:47.051 | 99.99th=[ 807] 00:16:47.051 bw ( KiB/s): min= 4096, max=12288, per=100.00%, avg=10153.44, stdev=1938.97, samples=39 00:16:47.051 iops : min= 1024, max= 3072, avg=2538.36, stdev=484.74, samples=39 00:16:47.051 lat (usec) : 250=98.20%, 500=1.73%, 750=0.05%, 1000=0.01% 00:16:47.051 lat (msec) : 2=0.01%, >=2000=0.01% 00:16:47.051 cpu : usr=0.70%, sys=2.29%, ctx=101003, majf=0, minf=5 00:16:47.051 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:47.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.051 issued rwts: total=50270,50688,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.051 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:47.051 00:16:47.051 Run status group 0 (all jobs): 00:16:47.051 READ: bw=3351KiB/s (3432kB/s), 3351KiB/s-3351KiB/s (3432kB/s-3432kB/s), io=196MiB (206MB), run=60000-60000msec 00:16:47.051 WRITE: bw=3379KiB/s (3460kB/s), 3379KiB/s-3379KiB/s (3460kB/s-3460kB/s), io=198MiB (208MB), run=60000-60000msec 00:16:47.051 00:16:47.051 Disk stats (read/write): 00:16:47.051 nvme0n1: ios=50457/50246, merge=0/0, ticks=10083/8159, in_queue=18242, util=99.73% 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:47.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:47.051 nvmf hotplug test: fio successful as expected 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:16:47.051 01:57:55 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:47.051 rmmod nvme_tcp 00:16:47.051 rmmod nvme_fabrics 00:16:47.051 rmmod nvme_keyring 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 86802 ']' 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 86802 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 86802 ']' 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 86802 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86802 00:16:47.051 killing process with pid 86802 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86802' 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 86802 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 86802 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:47.051 01:57:55 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:16:47.051 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:16:47.051 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:47.051 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:47.051 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0 00:16:47.052 00:16:47.052 real 1m4.091s 00:16:47.052 user 3m48.193s 00:16:47.052 sys 0m24.032s 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:47.052 ************************************ 00:16:47.052 END TEST nvmf_initiator_timeout 00:16:47.052 ************************************ 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:47.052 ************************************ 00:16:47.052 START TEST nvmf_nsid 00:16:47.052 ************************************ 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:16:47.052 * Looking for test storage... 00:16:47.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:47.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.052 --rc genhtml_branch_coverage=1 00:16:47.052 --rc genhtml_function_coverage=1 00:16:47.052 --rc genhtml_legend=1 00:16:47.052 --rc geninfo_all_blocks=1 00:16:47.052 --rc geninfo_unexecuted_blocks=1 00:16:47.052 00:16:47.052 ' 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:47.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.052 --rc genhtml_branch_coverage=1 00:16:47.052 --rc genhtml_function_coverage=1 00:16:47.052 --rc genhtml_legend=1 00:16:47.052 --rc geninfo_all_blocks=1 00:16:47.052 --rc geninfo_unexecuted_blocks=1 00:16:47.052 00:16:47.052 ' 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:47.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.052 --rc genhtml_branch_coverage=1 00:16:47.052 --rc genhtml_function_coverage=1 00:16:47.052 --rc genhtml_legend=1 00:16:47.052 --rc geninfo_all_blocks=1 00:16:47.052 --rc geninfo_unexecuted_blocks=1 00:16:47.052 00:16:47.052 ' 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:47.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.052 --rc genhtml_branch_coverage=1 00:16:47.052 --rc genhtml_function_coverage=1 00:16:47.052 --rc genhtml_legend=1 00:16:47.052 --rc geninfo_all_blocks=1 00:16:47.052 --rc geninfo_unexecuted_blocks=1 00:16:47.052 00:16:47.052 ' 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
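The lt/cmp_versions trace just above is the lcov version gate from scripts/common.sh: both version strings are split on '.', '-' and ':' and compared component by component. A simplified re-implementation of that traced logic, assuming purely numeric components (the real helper also validates each component with a decimal() check):

    # sketch of the cmp_versions logic traced above
    cmp_versions() {
        # $1 = left version, $2 = operator ('<', '>' or '=='), $3 = right version
        local IFS=.-:
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$3"
        local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < len; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components count as 0
            (( a > b )) && { [[ $2 == '>' ]]; return; }
            (( a < b )) && { [[ $2 == '<' ]]; return; }
        done
        [[ $2 == '==' ]]   # every component matched
    }
    cmp_versions 1.15 '<' 2 && echo "1.15 < 2"   # matches the trace: lt 1.15 2 is true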
00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:47.052 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:47.053 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:47.053 Cannot find device "nvmf_init_br" 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:47.053 Cannot find device "nvmf_init_br2" 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:47.053 Cannot find device "nvmf_tgt_br" 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:47.053 Cannot find device "nvmf_tgt_br2" 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:47.053 Cannot find device "nvmf_init_br" 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:47.053 Cannot find device "nvmf_init_br2" 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:47.053 Cannot find device "nvmf_tgt_br" 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:47.053 Cannot find device "nvmf_tgt_br2" 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:47.053 Cannot find device "nvmf_br" 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:47.053 Cannot find device "nvmf_init_if" 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:47.053 Cannot find device "nvmf_init_if2" 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:47.053 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:16:47.053 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:47.053 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
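With bridge membership in place, the script next punches NVMe/TCP port 4420 through the host firewall. The ipts wrapper tags every rule it inserts with an SPDK_NVMF comment, which is what lets teardown (the iptr step visible at the end of the previous test: iptables-save | grep -v SPDK_NVMF | iptables-restore) strip exactly the rules this run added and nothing else. A sketch of that pattern, reconstructed from the @790/@791 expansions in this log:

    # tag on insert, filter on teardown (reconstructed from the xtraced expansions)
    ipts() {
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    iptr() {
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
    iptr   # later: removes only the SPDK_NVMF-tagged rules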
00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:47.054 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:47.054 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:16:47.054 00:16:47.054 --- 10.0.0.3 ping statistics --- 00:16:47.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.054 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:47.054 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:47.054 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:16:47.054 00:16:47.054 --- 10.0.0.4 ping statistics --- 00:16:47.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.054 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:47.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:47.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:16:47.054 00:16:47.054 --- 10.0.0.1 ping statistics --- 00:16:47.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.054 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:47.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:47.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:16:47.054 00:16:47.054 --- 10.0.0.2 ping statistics --- 00:16:47.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.054 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=87750 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 87750 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 87750 ']' 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:47.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:47.054 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:47.054 [2024-11-19 01:57:56.945348] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
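Two details in the block above are worth unpacking. The ipts wrapper (its expansion is shown at nvmf/common.sh@790) tags every rule it inserts with an SPDK_NVMF comment, and the four pings verify host-to-namespace reachability in both directions across the bridge before the target application starts. A sketch of the tagging pattern, reconstructed from the expansions visible in this trace:

  # Tag each rule so cleanup can drop them all without tracking rule numbers:
  ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
  ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  # The iptr step near the end of this test restores everything except tagged rules:
  iptables-save | grep -v SPDK_NVMF | iptables-restore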
00:16:47.054 [2024-11-19 01:57:56.945443] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:47.054 [2024-11-19 01:57:57.091602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.054 [2024-11-19 01:57:57.110379] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:47.054 [2024-11-19 01:57:57.110448] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:47.054 [2024-11-19 01:57:57.110473] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:47.054 [2024-11-19 01:57:57.110480] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:47.054 [2024-11-19 01:57:57.110486] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:47.054 [2024-11-19 01:57:57.110753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.054 [2024-11-19 01:57:57.137768] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:47.054 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.054 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:16:47.054 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:47.054 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:47.054 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:47.054 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.054 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:47.054 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=87769 00:16:47.054 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:16:47.054 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:16:47.054 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:16:47.054 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:16:47.054 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:47.054 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:47.054 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:47.054 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:47.054 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:47.054 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:47.054 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:47.054 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:16:47.054 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:47.054 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:16:47.054 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:16:47.054 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=042dff7a-6f08-4cc2-9755-e1a056a8084b 00:16:47.054 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:16:47.054 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=70201a39-99f1-40b2-9086-26e232a7dfdf 00:16:47.054 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:16:47.054 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=f17e4e83-f751-47c6-90f9-386ebc1b7a7f 00:16:47.054 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:16:47.054 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.054 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:47.054 null0 00:16:47.054 null1 00:16:47.054 null2 00:16:47.054 [2024-11-19 01:57:57.300266] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:47.054 [2024-11-19 01:57:57.317382] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:16:47.054 [2024-11-19 01:57:57.317487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87769 ] 00:16:47.054 [2024-11-19 01:57:57.324406] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:47.055 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.055 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 87769 /var/tmp/tgt2.sock 00:16:47.055 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 87769 ']' 00:16:47.055 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:16:47.055 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:47.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:16:47.055 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
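The three UUIDs generated above become the namespaces' NGUIDs; the checks further down only need to strip the dashes and compare, uppercased, against the nguid field reported by nvme id-ns. Roughly, assuming uuid2nguid (nvmf/common.sh@787) is just the "tr -d -" seen in the trace applied to the uppercased UUID:

  uuid2nguid() { tr -d - <<< "${1^^}"; }   # 042dff7a-6f08-... -> 042DFF7A6F08...
  nguid=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
  [[ "${nguid^^}" == "$(uuid2nguid 042dff7a-6f08-4cc2-9755-e1a056a8084b)" ]] && echo match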
00:16:47.055 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:47.055 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:47.055 [2024-11-19 01:57:57.459479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.055 [2024-11-19 01:57:57.479036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:47.055 [2024-11-19 01:57:57.514581] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:47.055 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.055 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:16:47.055 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:16:47.625 [2024-11-19 01:57:58.066608] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:47.625 [2024-11-19 01:57:58.082681] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:16:47.625 nvme0n1 nvme0n2 00:16:47.625 nvme1n1 00:16:47.625 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:16:47.625 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:16:47.625 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:16:47.884 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:16:47.884 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:16:47.884 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:16:47.884 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:16:47.884 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:16:47.884 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:16:47.884 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:16:47.884 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:16:47.884 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:47.884 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:47.884 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:16:47.884 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:16:47.884 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:16:48.820 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:48.820 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:48.820 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:48.820 01:57:59 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:48.820 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:16:48.820 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 042dff7a-6f08-4cc2-9755-e1a056a8084b 00:16:48.820 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:16:48.820 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:16:48.820 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:16:48.820 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:16:48.820 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:16:48.820 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=042dff7a6f084cc29755e1a056a8084b 00:16:48.820 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 042DFF7A6F084CC29755E1A056A8084B 00:16:48.820 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 042DFF7A6F084CC29755E1A056A8084B == \0\4\2\D\F\F\7\A\6\F\0\8\4\C\C\2\9\7\5\5\E\1\A\0\5\6\A\8\0\8\4\B ]] 00:16:48.820 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:16:48.820 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:16:48.820 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:48.820 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:16:48.820 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:48.820 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:16:48.820 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:16:48.820 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 70201a39-99f1-40b2-9086-26e232a7dfdf 00:16:48.820 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:16:48.820 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:16:48.820 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:16:48.820 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:16:48.820 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:16:49.078 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=70201a3999f140b2908626e232a7dfdf 00:16:49.078 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 70201A3999F140B2908626E232A7DFDF 00:16:49.078 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 70201A3999F140B2908626E232A7DFDF == \7\0\2\0\1\A\3\9\9\9\F\1\4\0\B\2\9\0\8\6\2\6\E\2\3\2\A\7\D\F\D\F ]] 00:16:49.078 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:16:49.078 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:16:49.078 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:49.078 01:57:59 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:16:49.078 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:49.078 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:16:49.078 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:16:49.078 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid f17e4e83-f751-47c6-90f9-386ebc1b7a7f 00:16:49.078 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:16:49.078 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:16:49.078 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:16:49.078 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:16:49.078 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:16:49.078 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=f17e4e83f75147c690f9386ebc1b7a7f 00:16:49.079 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo F17E4E83F75147C690F9386EBC1B7A7F 00:16:49.079 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ F17E4E83F75147C690F9386EBC1B7A7F == \F\1\7\E\4\E\8\3\F\7\5\1\4\7\C\6\9\0\F\9\3\8\6\E\B\C\1\B\7\A\7\F ]] 00:16:49.079 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:16:49.079 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:16:49.079 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:16:49.079 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 87769 00:16:49.079 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 87769 ']' 00:16:49.079 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 87769 00:16:49.079 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:16:49.079 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:49.079 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87769 00:16:49.338 killing process with pid 87769 00:16:49.338 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:49.338 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:49.338 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87769' 00:16:49.338 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 87769 00:16:49.338 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 87769 00:16:49.338 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:16:49.338 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:49.338 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:16:49.597 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:16:49.597 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:16:49.597 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:49.597 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:49.597 rmmod nvme_tcp 00:16:49.597 rmmod nvme_fabrics 00:16:49.597 rmmod nvme_keyring 00:16:49.597 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:49.597 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:16:49.597 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:16:49.597 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 87750 ']' 00:16:49.597 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 87750 00:16:49.597 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 87750 ']' 00:16:49.597 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 87750 00:16:49.597 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:16:49.597 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:49.597 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87750 00:16:49.597 killing process with pid 87750 00:16:49.597 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:49.597 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:49.597 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87750' 00:16:49.597 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 87750 00:16:49.597 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 87750 00:16:49.597 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:49.597 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:49.597 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:49.597 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:16:49.597 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:16:49.597 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:49.597 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:16:49.856 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:49.856 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:49.856 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:49.856 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:49.856 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:49.856 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:16:49.856 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:49.856 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:49.856 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:49.856 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:49.856 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:49.856 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:49.856 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:49.856 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:49.856 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:49.856 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:49.856 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.856 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:49.856 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.856 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:16:49.856 00:16:49.856 real 0m4.174s 00:16:49.856 user 0m6.186s 00:16:49.856 sys 0m1.555s 00:16:49.856 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:49.856 ************************************ 00:16:49.856 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:49.856 END TEST nvmf_nsid 00:16:49.856 ************************************ 00:16:50.116 01:58:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:16:50.116 00:16:50.116 real 6m50.760s 00:16:50.116 user 16m58.952s 00:16:50.116 sys 1m55.155s 00:16:50.116 ************************************ 00:16:50.116 END TEST nvmf_target_extra 00:16:50.116 ************************************ 00:16:50.116 01:58:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:50.116 01:58:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:50.116 01:58:00 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:16:50.116 01:58:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:50.116 01:58:00 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:50.116 01:58:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:50.116 ************************************ 00:16:50.116 START TEST nvmf_host 00:16:50.116 ************************************ 00:16:50.116 01:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:16:50.116 * Looking for test storage... 
00:16:50.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:16:50.116 01:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:50.116 01:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:16:50.116 01:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:50.116 01:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:50.116 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:50.116 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:50.116 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:50.116 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:16:50.116 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:16:50.116 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:16:50.116 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:16:50.116 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:16:50.116 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:16:50.116 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:16:50.116 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:50.116 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:16:50.116 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:16:50.116 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:50.116 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:50.116 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:16:50.116 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:16:50.116 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:50.116 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:16:50.116 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:50.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.117 --rc genhtml_branch_coverage=1 00:16:50.117 --rc genhtml_function_coverage=1 00:16:50.117 --rc genhtml_legend=1 00:16:50.117 --rc geninfo_all_blocks=1 00:16:50.117 --rc geninfo_unexecuted_blocks=1 00:16:50.117 00:16:50.117 ' 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:50.117 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:16:50.117 --rc genhtml_branch_coverage=1 00:16:50.117 --rc genhtml_function_coverage=1 00:16:50.117 --rc genhtml_legend=1 00:16:50.117 --rc geninfo_all_blocks=1 00:16:50.117 --rc geninfo_unexecuted_blocks=1 00:16:50.117 00:16:50.117 ' 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:50.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.117 --rc genhtml_branch_coverage=1 00:16:50.117 --rc genhtml_function_coverage=1 00:16:50.117 --rc genhtml_legend=1 00:16:50.117 --rc geninfo_all_blocks=1 00:16:50.117 --rc geninfo_unexecuted_blocks=1 00:16:50.117 00:16:50.117 ' 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:50.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.117 --rc genhtml_branch_coverage=1 00:16:50.117 --rc genhtml_function_coverage=1 00:16:50.117 --rc genhtml_legend=1 00:16:50.117 --rc geninfo_all_blocks=1 00:16:50.117 --rc geninfo_unexecuted_blocks=1 00:16:50.117 00:16:50.117 ' 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:16:50.117 01:58:00 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:50.378 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:50.378 
01:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.378 ************************************ 00:16:50.378 START TEST nvmf_identify 00:16:50.378 ************************************ 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:50.378 * Looking for test storage... 00:16:50.378 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:50.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.378 --rc genhtml_branch_coverage=1 00:16:50.378 --rc genhtml_function_coverage=1 00:16:50.378 --rc genhtml_legend=1 00:16:50.378 --rc geninfo_all_blocks=1 00:16:50.378 --rc geninfo_unexecuted_blocks=1 00:16:50.378 00:16:50.378 ' 00:16:50.378 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:50.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.378 --rc genhtml_branch_coverage=1 00:16:50.378 --rc genhtml_function_coverage=1 00:16:50.378 --rc genhtml_legend=1 00:16:50.378 --rc geninfo_all_blocks=1 00:16:50.378 --rc geninfo_unexecuted_blocks=1 00:16:50.378 00:16:50.378 ' 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:50.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.379 --rc genhtml_branch_coverage=1 00:16:50.379 --rc genhtml_function_coverage=1 00:16:50.379 --rc genhtml_legend=1 00:16:50.379 --rc geninfo_all_blocks=1 00:16:50.379 --rc geninfo_unexecuted_blocks=1 00:16:50.379 00:16:50.379 ' 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:50.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.379 --rc genhtml_branch_coverage=1 00:16:50.379 --rc genhtml_function_coverage=1 00:16:50.379 --rc genhtml_legend=1 00:16:50.379 --rc geninfo_all_blocks=1 00:16:50.379 --rc geninfo_unexecuted_blocks=1 00:16:50.379 00:16:50.379 ' 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.379 
01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:50.379 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.379 01:58:00 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:50.379 Cannot find device "nvmf_init_br" 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:50.379 Cannot find device "nvmf_init_br2" 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:16:50.379 01:58:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:50.639 Cannot find device "nvmf_tgt_br" 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:16:50.639 Cannot find device "nvmf_tgt_br2" 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:50.639 Cannot find device "nvmf_init_br" 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:50.639 Cannot find device "nvmf_init_br2" 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:50.639 Cannot find device "nvmf_tgt_br" 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:50.639 Cannot find device "nvmf_tgt_br2" 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:50.639 Cannot find device "nvmf_br" 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:50.639 Cannot find device "nvmf_init_if" 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:50.639 Cannot find device "nvmf_init_if2" 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:50.639 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:50.639 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:50.639 
01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:50.639 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:50.640 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:50.640 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:50.640 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:50.640 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:50.640 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:50.640 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:50.640 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:50.640 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:50.899 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:50.899 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:16:50.899 00:16:50.899 --- 10.0.0.3 ping statistics --- 00:16:50.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.899 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:50.899 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:50.899 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:16:50.899 00:16:50.899 --- 10.0.0.4 ping statistics --- 00:16:50.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.899 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:50.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:50.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:16:50.899 00:16:50.899 --- 10.0.0.1 ping statistics --- 00:16:50.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.899 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:50.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:50.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:16:50.899 00:16:50.899 --- 10.0.0.2 ping statistics --- 00:16:50.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.899 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=88122 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 88122 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 88122 ']' 00:16:50.899 
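For anyone replaying this setup by hand, the launch-and-wait step traced above reduces to two commands. A minimal sketch, assuming the SPDK checkout at /home/vagrant/spdk_repo/spdk and the nvmf_tgt_ns_spdk namespace built earlier (the pid 88122 above is specific to this run):

    # Start the NVMe-oF target inside the target network namespace.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    # Block until the app answers on its RPC socket, as waitforlisten does.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done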
01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:50.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:50.899 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:50.899 [2024-11-19 01:58:01.442733] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:16:50.900 [2024-11-19 01:58:01.442824] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.158 [2024-11-19 01:58:01.595688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:51.158 [2024-11-19 01:58:01.620674] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.158 [2024-11-19 01:58:01.620993] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:51.158 [2024-11-19 01:58:01.621170] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:51.158 [2024-11-19 01:58:01.621330] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:51.158 [2024-11-19 01:58:01.621386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
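The 0xF core mask visible in the EAL parameters above is plain bit arithmetic: 0xF is binary 1111, selecting CPUs 0 through 3, which is why exactly four reactor threads are reported next. A throwaway sketch for expanding any mask (not part of the test scripts):

    mask=0xF
    printf 'reactors on cores:'
    for cpu in $(seq 0 31); do
        # test bit <cpu> of the mask
        (( (mask >> cpu) & 1 )) && printf ' %d' "$cpu"
    done
    echo    # prints: reactors on cores: 0 1 2 3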
00:16:51.158 [2024-11-19 01:58:01.622394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.158 [2024-11-19 01:58:01.622545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.158 [2024-11-19 01:58:01.622616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.158 [2024-11-19 01:58:01.622615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:51.158 [2024-11-19 01:58:01.655235] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:51.158 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:51.158 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:16:51.158 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:51.158 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.158 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:51.158 [2024-11-19 01:58:01.706072] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:51.158 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.158 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:16:51.158 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:51.158 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:51.158 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:51.158 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.158 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:51.419 Malloc0 00:16:51.419 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.419 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:51.419 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.419 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:51.419 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.419 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:16:51.419 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.419 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:51.419 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.419 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:51.419 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.419 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:51.419 [2024-11-19 01:58:01.795455] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:51.419 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.419 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:16:51.419 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.419 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:51.419 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.419 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:16:51.419 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.419 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:51.419 [ 00:16:51.419 { 00:16:51.419 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:51.419 "subtype": "Discovery", 00:16:51.419 "listen_addresses": [ 00:16:51.419 { 00:16:51.419 "trtype": "TCP", 00:16:51.419 "adrfam": "IPv4", 00:16:51.419 "traddr": "10.0.0.3", 00:16:51.419 "trsvcid": "4420" 00:16:51.419 } 00:16:51.419 ], 00:16:51.419 "allow_any_host": true, 00:16:51.419 "hosts": [] 00:16:51.419 }, 00:16:51.419 { 00:16:51.419 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.419 "subtype": "NVMe", 00:16:51.419 "listen_addresses": [ 00:16:51.419 { 00:16:51.419 "trtype": "TCP", 00:16:51.419 "adrfam": "IPv4", 00:16:51.419 "traddr": "10.0.0.3", 00:16:51.419 "trsvcid": "4420" 00:16:51.419 } 00:16:51.419 ], 00:16:51.419 "allow_any_host": true, 00:16:51.419 "hosts": [], 00:16:51.419 "serial_number": "SPDK00000000000001", 00:16:51.419 "model_number": "SPDK bdev Controller", 00:16:51.419 "max_namespaces": 32, 00:16:51.419 "min_cntlid": 1, 00:16:51.419 "max_cntlid": 65519, 00:16:51.419 "namespaces": [ 00:16:51.419 { 00:16:51.419 "nsid": 1, 00:16:51.419 "bdev_name": "Malloc0", 00:16:51.419 "name": "Malloc0", 00:16:51.419 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:16:51.419 "eui64": "ABCDEF0123456789", 00:16:51.419 "uuid": "ce96616e-3667-46ce-a5e0-efb682d309ba" 00:16:51.419 } 00:16:51.419 ] 00:16:51.419 } 00:16:51.419 ] 00:16:51.419 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.419 01:58:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:16:51.419 [2024-11-19 01:58:01.848328] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
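Collected in one place, the rpc_cmd calls traced above amount to this rpc.py sequence (a sketch of what the wrapper drives over the default /var/tmp/spdk.sock socket; every argument is taken from the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB in-capsule data
    $rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_get_subsystems                            # returns the JSON printed above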
00:16:51.419 [2024-11-19 01:58:01.848495] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88150 ] 00:16:51.419 [2024-11-19 01:58:02.013429] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:16:51.419 [2024-11-19 01:58:02.013519] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:51.419 [2024-11-19 01:58:02.013528] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:51.419 [2024-11-19 01:58:02.013542] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:51.419 [2024-11-19 01:58:02.013553] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:51.419 [2024-11-19 01:58:02.013924] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:16:51.419 [2024-11-19 01:58:02.014008] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d079f0 0 00:16:51.419 [2024-11-19 01:58:02.019525] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:51.419 [2024-11-19 01:58:02.019554] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:51.419 [2024-11-19 01:58:02.019563] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:51.419 [2024-11-19 01:58:02.019567] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:51.419 [2024-11-19 01:58:02.019604] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.419 [2024-11-19 01:58:02.019612] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.419 [2024-11-19 01:58:02.019618] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d079f0) 00:16:51.419 [2024-11-19 01:58:02.019634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:51.419 [2024-11-19 01:58:02.019684] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d406c0, cid 0, qid 0 00:16:51.419 [2024-11-19 01:58:02.027518] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.419 [2024-11-19 01:58:02.027546] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.419 [2024-11-19 01:58:02.027553] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.419 [2024-11-19 01:58:02.027559] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d406c0) on tqpair=0x1d079f0 00:16:51.419 [2024-11-19 01:58:02.027578] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:51.419 [2024-11-19 01:58:02.027589] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:16:51.419 [2024-11-19 01:58:02.027597] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:16:51.419 [2024-11-19 01:58:02.027616] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.419 [2024-11-19 01:58:02.027622] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
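Decoded, the burst above is the host side of NVMe-oF bring-up: the TCP socket connects to 10.0.0.3:4420, the ICReq/ICResp exchange initializes the queue pair, a FABRIC CONNECT capsule establishes the admin queue and returns CNTLID 0x0001, and the driver then walks its init state machine starting with a property read of the VS register. The same discovery exchange can be reproduced from the initiator with nvme-cli (assuming a fabrics-capable nvme-cli build is installed):

    nvme discover -t tcp -a 10.0.0.3 -s 4420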
00:16:51.419 [2024-11-19 01:58:02.027627] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d079f0) 00:16:51.419 [2024-11-19 01:58:02.027639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.419 [2024-11-19 01:58:02.027684] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d406c0, cid 0, qid 0 00:16:51.419 [2024-11-19 01:58:02.027746] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.419 [2024-11-19 01:58:02.027755] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.419 [2024-11-19 01:58:02.027760] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.419 [2024-11-19 01:58:02.027765] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d406c0) on tqpair=0x1d079f0 00:16:51.419 [2024-11-19 01:58:02.027773] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:16:51.419 [2024-11-19 01:58:02.027783] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:16:51.419 [2024-11-19 01:58:02.027793] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.419 [2024-11-19 01:58:02.027798] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.419 [2024-11-19 01:58:02.027803] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d079f0) 00:16:51.419 [2024-11-19 01:58:02.027813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.419 [2024-11-19 01:58:02.027835] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d406c0, cid 0, qid 0 00:16:51.419 [2024-11-19 01:58:02.027878] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.419 [2024-11-19 01:58:02.027887] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.419 [2024-11-19 01:58:02.027892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.419 [2024-11-19 01:58:02.027897] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d406c0) on tqpair=0x1d079f0 00:16:51.419 [2024-11-19 01:58:02.027905] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:16:51.419 [2024-11-19 01:58:02.027916] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:16:51.419 [2024-11-19 01:58:02.027925] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.419 [2024-11-19 01:58:02.027931] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.419 [2024-11-19 01:58:02.027936] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d079f0) 00:16:51.420 [2024-11-19 01:58:02.027945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.420 [2024-11-19 01:58:02.027966] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d406c0, cid 0, qid 0 00:16:51.420 [2024-11-19 01:58:02.028006] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.420 [2024-11-19 01:58:02.028015] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.420 [2024-11-19 01:58:02.028019] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.420 [2024-11-19 01:58:02.028025] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d406c0) on tqpair=0x1d079f0 00:16:51.420 [2024-11-19 01:58:02.028032] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:51.420 [2024-11-19 01:58:02.028045] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.420 [2024-11-19 01:58:02.028051] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.420 [2024-11-19 01:58:02.028056] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d079f0) 00:16:51.420 [2024-11-19 01:58:02.028066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.420 [2024-11-19 01:58:02.028086] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d406c0, cid 0, qid 0 00:16:51.420 [2024-11-19 01:58:02.028129] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.420 [2024-11-19 01:58:02.028137] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.420 [2024-11-19 01:58:02.028142] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.420 [2024-11-19 01:58:02.028147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d406c0) on tqpair=0x1d079f0 00:16:51.420 [2024-11-19 01:58:02.028154] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:16:51.420 [2024-11-19 01:58:02.028160] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:16:51.420 [2024-11-19 01:58:02.028171] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:51.420 [2024-11-19 01:58:02.028283] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:16:51.420 [2024-11-19 01:58:02.028291] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:51.420 [2024-11-19 01:58:02.028302] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.420 [2024-11-19 01:58:02.028308] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.420 [2024-11-19 01:58:02.028313] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d079f0) 00:16:51.420 [2024-11-19 01:58:02.028322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.420 [2024-11-19 01:58:02.028344] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d406c0, cid 0, qid 0 00:16:51.420 [2024-11-19 01:58:02.028387] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.420 [2024-11-19 01:58:02.028396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.420 [2024-11-19 01:58:02.028401] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:16:51.420 [2024-11-19 01:58:02.028406] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d406c0) on tqpair=0x1d079f0 00:16:51.420 [2024-11-19 01:58:02.028413] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:51.420 [2024-11-19 01:58:02.028425] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.420 [2024-11-19 01:58:02.028431] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.420 [2024-11-19 01:58:02.028436] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d079f0) 00:16:51.420 [2024-11-19 01:58:02.028446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.420 [2024-11-19 01:58:02.028466] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d406c0, cid 0, qid 0 00:16:51.420 [2024-11-19 01:58:02.028532] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.420 [2024-11-19 01:58:02.028543] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.420 [2024-11-19 01:58:02.028547] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.420 [2024-11-19 01:58:02.028553] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d406c0) on tqpair=0x1d079f0 00:16:51.420 [2024-11-19 01:58:02.028559] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:51.420 [2024-11-19 01:58:02.028566] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:16:51.420 [2024-11-19 01:58:02.028577] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:16:51.420 [2024-11-19 01:58:02.028595] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:16:51.420 [2024-11-19 01:58:02.028608] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.420 [2024-11-19 01:58:02.028614] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d079f0) 00:16:51.420 [2024-11-19 01:58:02.028624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.420 [2024-11-19 01:58:02.028648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d406c0, cid 0, qid 0 00:16:51.420 [2024-11-19 01:58:02.028734] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:51.420 [2024-11-19 01:58:02.028744] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:51.420 [2024-11-19 01:58:02.028749] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:51.420 [2024-11-19 01:58:02.028754] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d079f0): datao=0, datal=4096, cccid=0 00:16:51.420 [2024-11-19 01:58:02.028761] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d406c0) on tqpair(0x1d079f0): expected_datao=0, payload_size=4096 00:16:51.420 [2024-11-19 01:58:02.028767] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:16:51.420 [2024-11-19 01:58:02.028778] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:51.420 [2024-11-19 01:58:02.028783] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:51.420 [2024-11-19 01:58:02.028794] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.420 [2024-11-19 01:58:02.028802] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.420 [2024-11-19 01:58:02.028807] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.420 [2024-11-19 01:58:02.028813] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d406c0) on tqpair=0x1d079f0 00:16:51.420 [2024-11-19 01:58:02.028823] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:16:51.420 [2024-11-19 01:58:02.028830] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:16:51.420 [2024-11-19 01:58:02.028836] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:16:51.420 [2024-11-19 01:58:02.028843] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:16:51.420 [2024-11-19 01:58:02.028850] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:16:51.420 [2024-11-19 01:58:02.028856] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:16:51.420 [2024-11-19 01:58:02.028873] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:16:51.420 [2024-11-19 01:58:02.028883] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.420 [2024-11-19 01:58:02.028889] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.420 [2024-11-19 01:58:02.028894] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d079f0) 00:16:51.420 [2024-11-19 01:58:02.028904] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:51.420 [2024-11-19 01:58:02.028927] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d406c0, cid 0, qid 0 00:16:51.420 [2024-11-19 01:58:02.028980] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.420 [2024-11-19 01:58:02.028989] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.420 [2024-11-19 01:58:02.028994] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.420 [2024-11-19 01:58:02.028999] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d406c0) on tqpair=0x1d079f0 00:16:51.420 [2024-11-19 01:58:02.029009] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.420 [2024-11-19 01:58:02.029014] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.420 [2024-11-19 01:58:02.029019] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d079f0) 00:16:51.420 [2024-11-19 01:58:02.029028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.420 
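The handshake traced through here is the standard controller-enable sequence: CC.EN and CSTS.RDY are both observed at 0, CC.EN is set to 1 with a FABRIC PROPERTY SET, CSTS is polled until RDY reads 1, and only then does the host issue Identify Controller (the 4096-byte transfer above, which reports an MDTS-limited max_xfer_size of 131072 and 16 SGEs) and configure asynchronous events. Over fabrics these registers are reached by Property Get/Set at their usual offsets, which nvme-cli can also issue once a controller device exists (/dev/nvme0 is a placeholder for whatever device a connect creates):

    nvme get-property /dev/nvme0 --offset=0x14 --human-readable   # CC, controller configuration
    nvme get-property /dev/nvme0 --offset=0x1c --human-readable   # CSTS, controller status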
[2024-11-19 01:58:02.029036] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.420 [2024-11-19 01:58:02.029041] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.420 [2024-11-19 01:58:02.029046] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d079f0) 00:16:51.420 [2024-11-19 01:58:02.029053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.420 [2024-11-19 01:58:02.029061] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.420 [2024-11-19 01:58:02.029066] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.420 [2024-11-19 01:58:02.029071] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d079f0) 00:16:51.420 [2024-11-19 01:58:02.029079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.420 [2024-11-19 01:58:02.029087] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.420 [2024-11-19 01:58:02.029092] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.420 [2024-11-19 01:58:02.029096] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d079f0) 00:16:51.420 [2024-11-19 01:58:02.029104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.420 [2024-11-19 01:58:02.029111] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:51.420 [2024-11-19 01:58:02.029126] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:51.420 [2024-11-19 01:58:02.029136] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.420 [2024-11-19 01:58:02.029141] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d079f0) 00:16:51.421 [2024-11-19 01:58:02.029150] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.421 [2024-11-19 01:58:02.029173] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d406c0, cid 0, qid 0 00:16:51.421 [2024-11-19 01:58:02.029182] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d40840, cid 1, qid 0 00:16:51.421 [2024-11-19 01:58:02.029189] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d409c0, cid 2, qid 0 00:16:51.421 [2024-11-19 01:58:02.029195] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d40b40, cid 3, qid 0 00:16:51.421 [2024-11-19 01:58:02.029201] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d40cc0, cid 4, qid 0 00:16:51.421 [2024-11-19 01:58:02.029272] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.421 [2024-11-19 01:58:02.029281] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.421 [2024-11-19 01:58:02.029286] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.421 [2024-11-19 01:58:02.029291] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d40cc0) on tqpair=0x1d079f0 00:16:51.421 [2024-11-19 
01:58:02.029298] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:16:51.421 [2024-11-19 01:58:02.029305] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:16:51.421 [2024-11-19 01:58:02.029319] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.421 [2024-11-19 01:58:02.029325] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d079f0) 00:16:51.421 [2024-11-19 01:58:02.029334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.421 [2024-11-19 01:58:02.029355] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d40cc0, cid 4, qid 0 00:16:51.421 [2024-11-19 01:58:02.029411] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:51.421 [2024-11-19 01:58:02.029420] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:51.421 [2024-11-19 01:58:02.029424] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:51.421 [2024-11-19 01:58:02.029429] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d079f0): datao=0, datal=4096, cccid=4 00:16:51.421 [2024-11-19 01:58:02.029436] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d40cc0) on tqpair(0x1d079f0): expected_datao=0, payload_size=4096 00:16:51.421 [2024-11-19 01:58:02.029442] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.421 [2024-11-19 01:58:02.029451] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:51.421 [2024-11-19 01:58:02.029457] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:51.421 [2024-11-19 01:58:02.029467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.421 [2024-11-19 01:58:02.029475] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.421 [2024-11-19 01:58:02.029480] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.421 [2024-11-19 01:58:02.029485] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d40cc0) on tqpair=0x1d079f0 00:16:51.421 [2024-11-19 01:58:02.029516] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:16:51.421 [2024-11-19 01:58:02.029554] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.421 [2024-11-19 01:58:02.029561] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d079f0) 00:16:51.421 [2024-11-19 01:58:02.029571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.421 [2024-11-19 01:58:02.029581] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.421 [2024-11-19 01:58:02.029586] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.421 [2024-11-19 01:58:02.029591] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d079f0) 00:16:51.421 [2024-11-19 01:58:02.029599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.421 [2024-11-19 01:58:02.029639] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d40cc0, cid 4, qid 0 00:16:51.421 [2024-11-19 01:58:02.029648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d40e40, cid 5, qid 0 00:16:51.421 [2024-11-19 01:58:02.029747] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:51.421 [2024-11-19 01:58:02.029758] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:51.421 [2024-11-19 01:58:02.029763] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:51.421 [2024-11-19 01:58:02.029768] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d079f0): datao=0, datal=1024, cccid=4 00:16:51.421 [2024-11-19 01:58:02.029774] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d40cc0) on tqpair(0x1d079f0): expected_datao=0, payload_size=1024 00:16:51.421 [2024-11-19 01:58:02.029780] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.421 [2024-11-19 01:58:02.029789] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:51.421 [2024-11-19 01:58:02.029794] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:51.421 [2024-11-19 01:58:02.029802] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.421 [2024-11-19 01:58:02.029810] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.421 [2024-11-19 01:58:02.029814] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.421 [2024-11-19 01:58:02.029820] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d40e40) on tqpair=0x1d079f0 00:16:51.421 [2024-11-19 01:58:02.029843] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.421 [2024-11-19 01:58:02.029856] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.421 [2024-11-19 01:58:02.029890] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.421 [2024-11-19 01:58:02.029910] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d40cc0) on tqpair=0x1d079f0 00:16:51.421 [2024-11-19 01:58:02.029940] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.421 [2024-11-19 01:58:02.029950] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d079f0) 00:16:51.421 [2024-11-19 01:58:02.029964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.421 [2024-11-19 01:58:02.030013] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d40cc0, cid 4, qid 0 00:16:51.421 [2024-11-19 01:58:02.030086] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:51.421 [2024-11-19 01:58:02.030100] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:51.421 [2024-11-19 01:58:02.030109] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:51.421 [2024-11-19 01:58:02.030117] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d079f0): datao=0, datal=3072, cccid=4 00:16:51.421 [2024-11-19 01:58:02.030138] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d40cc0) on tqpair(0x1d079f0): expected_datao=0, payload_size=3072 00:16:51.421 [2024-11-19 01:58:02.030147] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.421 [2024-11-19 01:58:02.030161] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
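The GET LOG PAGE pair above is the canonical discovery-log read: cdw10 packs the log identifier (0x70) in bits 7:0 and the zero-based dword count in bits 31:16, so cdw10:00ff0070 fetches 256 dwords (1024 bytes, the header plus initial records), and cdw10:02ff0070 re-reads the full 768 dwords (3072 bytes) once the record count is known; a final 8-byte read of the generation counter, visible just below as cdw10:00010070, confirms the log did not change mid-read. The equivalent raw read with nvme-cli against a connected discovery controller (device name again a placeholder):

    nvme get-log /dev/nvme0 --log-id=0x70 --log-len=1024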
00:16:51.421 [2024-11-19 01:58:02.030171] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:51.421 [2024-11-19 01:58:02.030196] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.421 [2024-11-19 01:58:02.030216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.421 [2024-11-19 01:58:02.030221] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.421 [2024-11-19 01:58:02.030226] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d40cc0) on tqpair=0x1d079f0 00:16:51.421 [2024-11-19 01:58:02.030241] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.421 [2024-11-19 01:58:02.030247] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d079f0) 00:16:51.421 [2024-11-19 01:58:02.030258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.421 [2024-11-19 01:58:02.030298] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d40cc0, cid 4, qid 0 00:16:51.421 ===================================================== 00:16:51.421 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:51.421 ===================================================== 00:16:51.421 Controller Capabilities/Features 00:16:51.421 ================================ 00:16:51.421 Vendor ID: 0000 00:16:51.421 Subsystem Vendor ID: 0000 00:16:51.421 Serial Number: .................... 00:16:51.421 Model Number: ........................................ 00:16:51.421 Firmware Version: 25.01 00:16:51.421 Recommended Arb Burst: 0 00:16:51.421 IEEE OUI Identifier: 00 00 00 00:16:51.421 Multi-path I/O 00:16:51.421 May have multiple subsystem ports: No 00:16:51.421 May have multiple controllers: No 00:16:51.421 Associated with SR-IOV VF: No 00:16:51.421 Max Data Transfer Size: 131072 00:16:51.421 Max Number of Namespaces: 0 00:16:51.421 Max Number of I/O Queues: 1024 00:16:51.421 NVMe Specification Version (VS): 1.3 00:16:51.421 NVMe Specification Version (Identify): 1.3 00:16:51.421 Maximum Queue Entries: 128 00:16:51.421 Contiguous Queues Required: Yes 00:16:51.421 Arbitration Mechanisms Supported 00:16:51.421 Weighted Round Robin: Not Supported 00:16:51.421 Vendor Specific: Not Supported 00:16:51.421 Reset Timeout: 15000 ms 00:16:51.421 Doorbell Stride: 4 bytes 00:16:51.421 NVM Subsystem Reset: Not Supported 00:16:51.421 Command Sets Supported 00:16:51.421 NVM Command Set: Supported 00:16:51.421 Boot Partition: Not Supported 00:16:51.421 Memory Page Size Minimum: 4096 bytes 00:16:51.421 Memory Page Size Maximum: 4096 bytes 00:16:51.421 Persistent Memory Region: Not Supported 00:16:51.421 Optional Asynchronous Events Supported 00:16:51.421 Namespace Attribute Notices: Not Supported 00:16:51.421 Firmware Activation Notices: Not Supported 00:16:51.421 ANA Change Notices: Not Supported 00:16:51.421 PLE Aggregate Log Change Notices: Not Supported 00:16:51.421 LBA Status Info Alert Notices: Not Supported 00:16:51.421 EGE Aggregate Log Change Notices: Not Supported 00:16:51.421 Normal NVM Subsystem Shutdown event: Not Supported 00:16:51.421 Zone Descriptor Change Notices: Not Supported 00:16:51.421 Discovery Log Change Notices: Supported 00:16:51.421 Controller Attributes 00:16:51.421 128-bit Host Identifier: Not Supported 00:16:51.421 Non-Operational Permissive Mode: Not Supported 00:16:51.421 NVM Sets: Not Supported 
00:16:51.421 Read Recovery Levels: Not Supported 00:16:51.422 Endurance Groups: Not Supported 00:16:51.422 Predictable Latency Mode: Not Supported 00:16:51.422 Traffic Based Keep Alive: Not Supported 00:16:51.422 Namespace Granularity: Not Supported 00:16:51.422 SQ Associations: Not Supported 00:16:51.422 UUID List: Not Supported 00:16:51.422 Multi-Domain Subsystem: Not Supported 00:16:51.422 Fixed Capacity Management: Not Supported 00:16:51.422 Variable Capacity Management: Not Supported 00:16:51.422 Delete Endurance Group: Not Supported 00:16:51.422 Delete NVM Set: Not Supported 00:16:51.422 Extended LBA Formats Supported: Not Supported 00:16:51.422 Flexible Data Placement Supported: Not Supported 00:16:51.422 00:16:51.422 Controller Memory Buffer Support 00:16:51.422 ================================ 00:16:51.422 Supported: No 00:16:51.422 00:16:51.422 Persistent Memory Region Support 00:16:51.422 ================================ 00:16:51.422 Supported: No 00:16:51.422 00:16:51.422 Admin Command Set Attributes 00:16:51.422 ============================ 00:16:51.422 Security Send/Receive: Not Supported 00:16:51.422 Format NVM: Not Supported 00:16:51.422 Firmware Activate/Download: Not Supported 00:16:51.422 Namespace Management: Not Supported 00:16:51.422 Device Self-Test: Not Supported 00:16:51.422 Directives: Not Supported 00:16:51.422 NVMe-MI: Not Supported 00:16:51.422 Virtualization Management: Not Supported 00:16:51.422 Doorbell Buffer Config: Not Supported 00:16:51.422 Get LBA Status Capability: Not Supported 00:16:51.422 Command & Feature Lockdown Capability: Not Supported 00:16:51.422 Abort Command Limit: 1 00:16:51.422 Async Event Request Limit: 4 00:16:51.422 Number of Firmware Slots: N/A 00:16:51.422 Firmware Slot 1 Read-Only: N/A [2024-11-19 01:58:02.030366] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:51.422 [2024-11-19 01:58:02.030375] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:51.422 [2024-11-19 01:58:02.030380] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:51.422 [2024-11-19 01:58:02.030385] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d079f0): datao=0, datal=8, cccid=4 00:16:51.422 [2024-11-19 01:58:02.030391] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d40cc0) on tqpair(0x1d079f0): expected_datao=0, payload_size=8 00:16:51.422 [2024-11-19 01:58:02.030397] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.422 [2024-11-19 01:58:02.030406] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:51.422 [2024-11-19 01:58:02.030411] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:51.422 [2024-11-19 01:58:02.030430] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.422 [2024-11-19 01:58:02.030439] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.422 [2024-11-19 01:58:02.030444] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.422 [2024-11-19 01:58:02.030449] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d40cc0) on tqpair=0x1d079f0 00:16:51.422 Firmware Activation Without Reset: N/A 00:16:51.422 Multiple Update Detection Support: N/A 00:16:51.422 Firmware Update Granularity: No Information Provided 00:16:51.422 Per-Namespace SMART Log: No 00:16:51.422 Asymmetric Namespace Access Log Page: Not Supported 00:16:51.422 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:51.422 
Command Effects Log Page: Not Supported 00:16:51.422 Get Log Page Extended Data: Supported 00:16:51.422 Telemetry Log Pages: Not Supported 00:16:51.422 Persistent Event Log Pages: Not Supported 00:16:51.422 Supported Log Pages Log Page: May Support 00:16:51.422 Commands Supported & Effects Log Page: Not Supported 00:16:51.422 Feature Identifiers & Effects Log Page:May Support 00:16:51.422 NVMe-MI Commands & Effects Log Page: May Support 00:16:51.422 Data Area 4 for Telemetry Log: Not Supported 00:16:51.422 Error Log Page Entries Supported: 128 00:16:51.422 Keep Alive: Not Supported 00:16:51.422 00:16:51.422 NVM Command Set Attributes 00:16:51.422 ========================== 00:16:51.422 Submission Queue Entry Size 00:16:51.422 Max: 1 00:16:51.422 Min: 1 00:16:51.422 Completion Queue Entry Size 00:16:51.422 Max: 1 00:16:51.422 Min: 1 00:16:51.422 Number of Namespaces: 0 00:16:51.422 Compare Command: Not Supported 00:16:51.422 Write Uncorrectable Command: Not Supported 00:16:51.422 Dataset Management Command: Not Supported 00:16:51.422 Write Zeroes Command: Not Supported 00:16:51.422 Set Features Save Field: Not Supported 00:16:51.422 Reservations: Not Supported 00:16:51.422 Timestamp: Not Supported 00:16:51.422 Copy: Not Supported 00:16:51.422 Volatile Write Cache: Not Present 00:16:51.422 Atomic Write Unit (Normal): 1 00:16:51.422 Atomic Write Unit (PFail): 1 00:16:51.422 Atomic Compare & Write Unit: 1 00:16:51.422 Fused Compare & Write: Supported 00:16:51.422 Scatter-Gather List 00:16:51.422 SGL Command Set: Supported 00:16:51.422 SGL Keyed: Supported 00:16:51.422 SGL Bit Bucket Descriptor: Not Supported 00:16:51.422 SGL Metadata Pointer: Not Supported 00:16:51.422 Oversized SGL: Not Supported 00:16:51.422 SGL Metadata Address: Not Supported 00:16:51.422 SGL Offset: Supported 00:16:51.422 Transport SGL Data Block: Not Supported 00:16:51.422 Replay Protected Memory Block: Not Supported 00:16:51.422 00:16:51.422 Firmware Slot Information 00:16:51.422 ========================= 00:16:51.422 Active slot: 0 00:16:51.422 00:16:51.422 00:16:51.422 Error Log 00:16:51.422 ========= 00:16:51.422 00:16:51.422 Active Namespaces 00:16:51.422 ================= 00:16:51.422 Discovery Log Page 00:16:51.422 ================== 00:16:51.422 Generation Counter: 2 00:16:51.422 Number of Records: 2 00:16:51.422 Record Format: 0 00:16:51.422 00:16:51.422 Discovery Log Entry 0 00:16:51.422 ---------------------- 00:16:51.422 Transport Type: 3 (TCP) 00:16:51.422 Address Family: 1 (IPv4) 00:16:51.422 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:51.422 Entry Flags: 00:16:51.422 Duplicate Returned Information: 1 00:16:51.422 Explicit Persistent Connection Support for Discovery: 1 00:16:51.422 Transport Requirements: 00:16:51.422 Secure Channel: Not Required 00:16:51.422 Port ID: 0 (0x0000) 00:16:51.422 Controller ID: 65535 (0xffff) 00:16:51.422 Admin Max SQ Size: 128 00:16:51.422 Transport Service Identifier: 4420 00:16:51.422 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:51.422 Transport Address: 10.0.0.3 00:16:51.422 Discovery Log Entry 1 00:16:51.422 ---------------------- 00:16:51.422 Transport Type: 3 (TCP) 00:16:51.422 Address Family: 1 (IPv4) 00:16:51.422 Subsystem Type: 2 (NVM Subsystem) 00:16:51.422 Entry Flags: 00:16:51.422 Duplicate Returned Information: 0 00:16:51.422 Explicit Persistent Connection Support for Discovery: 0 00:16:51.422 Transport Requirements: 00:16:51.422 Secure Channel: Not Required 00:16:51.422 Port ID: 0 (0x0000) 00:16:51.422 Controller ID: 65535 
(0xffff) 00:16:51.422 Admin Max SQ Size: 128 00:16:51.422 Transport Service Identifier: 4420 00:16:51.422 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:16:51.422 Transport Address: 10.0.0.3 [2024-11-19 01:58:02.030595] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:16:51.422 [2024-11-19 01:58:02.030614] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d406c0) on tqpair=0x1d079f0 00:16:51.422 [2024-11-19 01:58:02.030623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.422 [2024-11-19 01:58:02.030630] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d40840) on tqpair=0x1d079f0 00:16:51.422 [2024-11-19 01:58:02.030636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.422 [2024-11-19 01:58:02.030643] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d409c0) on tqpair=0x1d079f0 00:16:51.422 [2024-11-19 01:58:02.030649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.422 [2024-11-19 01:58:02.030655] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d40b40) on tqpair=0x1d079f0 00:16:51.422 [2024-11-19 01:58:02.030662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.422 [2024-11-19 01:58:02.030673] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.422 [2024-11-19 01:58:02.030679] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.422 [2024-11-19 01:58:02.030684] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d079f0) 00:16:51.422 [2024-11-19 01:58:02.030694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.422 [2024-11-19 01:58:02.030723] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d40b40, cid 3, qid 0 00:16:51.422 [2024-11-19 01:58:02.030773] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.422 [2024-11-19 01:58:02.030782] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.422 [2024-11-19 01:58:02.030787] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.423 [2024-11-19 01:58:02.030792] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d40b40) on tqpair=0x1d079f0 00:16:51.423 [2024-11-19 01:58:02.030802] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.423 [2024-11-19 01:58:02.030808] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.423 [2024-11-19 01:58:02.030813] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d079f0) 00:16:51.423 [2024-11-19 01:58:02.030831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.423 [2024-11-19 01:58:02.030857] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d40b40, cid 3, qid 0 00:16:51.423 [2024-11-19 01:58:02.030932] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.423 [2024-11-19 01:58:02.030941] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:16:51.423 [2024-11-19 01:58:02.030945] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.423 [2024-11-19 01:58:02.030951] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d40b40) on tqpair=0x1d079f0 00:16:51.423 [2024-11-19 01:58:02.030957] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:16:51.423 [2024-11-19 01:58:02.030964] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:16:51.423 [2024-11-19 01:58:02.030976] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.423 [2024-11-19 01:58:02.030982] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.423 [2024-11-19 01:58:02.030987] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d079f0) 00:16:51.423 [2024-11-19 01:58:02.030996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.423 [2024-11-19 01:58:02.031017] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d40b40, cid 3, qid 0 00:16:51.423 [2024-11-19 01:58:02.031064] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.423 [2024-11-19 01:58:02.031073] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.423 [2024-11-19 01:58:02.031078] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.423 [2024-11-19 01:58:02.031083] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d40b40) on tqpair=0x1d079f0 00:16:51.423 [2024-11-19 01:58:02.031096] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.423 [2024-11-19 01:58:02.031102] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.423 [2024-11-19 01:58:02.031107] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d079f0) 00:16:51.423 [2024-11-19 01:58:02.031116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.423 [2024-11-19 01:58:02.031136] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d40b40, cid 3, qid 0 00:16:51.423 [2024-11-19 01:58:02.031180] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.423 [2024-11-19 01:58:02.031188] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.423 [2024-11-19 01:58:02.031193] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.423 [2024-11-19 01:58:02.031198] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d40b40) on tqpair=0x1d079f0 00:16:51.423 [2024-11-19 01:58:02.031211] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.423 [2024-11-19 01:58:02.031217] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.423 [2024-11-19 01:58:02.031221] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d079f0) 00:16:51.423 [2024-11-19 01:58:02.031230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.423 [2024-11-19 01:58:02.031250] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d40b40, cid 3, qid 0 00:16:51.423 [2024-11-19 01:58:02.031294] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.423 [2024-11-19 01:58:02.031302] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.423 [2024-11-19 01:58:02.031307] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.423 [2024-11-19 01:58:02.031312] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d40b40) on tqpair=0x1d079f0 00:16:51.423 [2024-11-19 01:58:02.031325] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.423 [2024-11-19 01:58:02.031331] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.423 [2024-11-19 01:58:02.031336] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d079f0) 00:16:51.423 [2024-11-19 01:58:02.031345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.423 [2024-11-19 01:58:02.031365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d40b40, cid 3, qid 0 00:16:51.423 [2024-11-19 01:58:02.031412] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.423 [2024-11-19 01:58:02.031420] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.423 [2024-11-19 01:58:02.031425] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.423 [2024-11-19 01:58:02.031430] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d40b40) on tqpair=0x1d079f0 00:16:51.423 [2024-11-19 01:58:02.031443] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.423 [2024-11-19 01:58:02.031449] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.423 [2024-11-19 01:58:02.031454] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d079f0) 00:16:51.423 [2024-11-19 01:58:02.031463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.423 [2024-11-19 01:58:02.031484] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d40b40, cid 3, qid 0 00:16:51.687 [2024-11-19 01:58:02.035529] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.687 [2024-11-19 01:58:02.035550] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.687 [2024-11-19 01:58:02.035557] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.687 [2024-11-19 01:58:02.035563] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d40b40) on tqpair=0x1d079f0 00:16:51.687 [2024-11-19 01:58:02.035581] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.687 [2024-11-19 01:58:02.035588] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.687 [2024-11-19 01:58:02.035593] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d079f0) 00:16:51.687 [2024-11-19 01:58:02.035605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.687 [2024-11-19 01:58:02.035639] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d40b40, cid 3, qid 0 00:16:51.687 [2024-11-19 01:58:02.035687] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.687 [2024-11-19 01:58:02.035696] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.687 [2024-11-19 
01:58:02.035701] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.687 [2024-11-19 01:58:02.035706] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d40b40) on tqpair=0x1d079f0 00:16:51.687 [2024-11-19 01:58:02.035717] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:16:51.687 00:16:51.687 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:16:51.687 [2024-11-19 01:58:02.080038] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:16:51.687 [2024-11-19 01:58:02.080087] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88152 ] 00:16:51.687 [2024-11-19 01:58:02.233741] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:16:51.687 [2024-11-19 01:58:02.233811] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:51.687 [2024-11-19 01:58:02.233818] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:51.687 [2024-11-19 01:58:02.233829] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:51.687 [2024-11-19 01:58:02.233838] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:51.687 [2024-11-19 01:58:02.234178] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:16:51.687 [2024-11-19 01:58:02.234264] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd819f0 0 00:16:51.687 [2024-11-19 01:58:02.246598] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:51.687 [2024-11-19 01:58:02.246622] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:51.687 [2024-11-19 01:58:02.246645] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:51.687 [2024-11-19 01:58:02.246649] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:51.687 [2024-11-19 01:58:02.246677] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.687 [2024-11-19 01:58:02.246685] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.687 [2024-11-19 01:58:02.246689] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd819f0) 00:16:51.688 [2024-11-19 01:58:02.246702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:51.688 [2024-11-19 01:58:02.246733] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdba6c0, cid 0, qid 0 00:16:51.688 [2024-11-19 01:58:02.254616] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.688 [2024-11-19 01:58:02.254642] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.688 [2024-11-19 01:58:02.254648] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.688 [2024-11-19 01:58:02.254652] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdba6c0) on tqpair=0xd819f0 00:16:51.688 [2024-11-19 01:58:02.254664] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:51.688 [2024-11-19 01:58:02.254672] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:16:51.688 [2024-11-19 01:58:02.254679] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:16:51.688 [2024-11-19 01:58:02.254695] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.688 [2024-11-19 01:58:02.254700] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.688 [2024-11-19 01:58:02.254704] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd819f0) 00:16:51.688 [2024-11-19 01:58:02.254713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.688 [2024-11-19 01:58:02.254741] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdba6c0, cid 0, qid 0 00:16:51.688 [2024-11-19 01:58:02.254803] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.688 [2024-11-19 01:58:02.254810] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.688 [2024-11-19 01:58:02.254813] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.688 [2024-11-19 01:58:02.254817] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdba6c0) on tqpair=0xd819f0 00:16:51.688 [2024-11-19 01:58:02.254840] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:16:51.688 [2024-11-19 01:58:02.254848] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:16:51.688 [2024-11-19 01:58:02.254856] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.688 [2024-11-19 01:58:02.254861] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.688 [2024-11-19 01:58:02.254865] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd819f0) 00:16:51.688 [2024-11-19 01:58:02.254872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.688 [2024-11-19 01:58:02.254892] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdba6c0, cid 0, qid 0 00:16:51.688 [2024-11-19 01:58:02.254940] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.688 [2024-11-19 01:58:02.254947] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.688 [2024-11-19 01:58:02.254951] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.688 [2024-11-19 01:58:02.254955] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdba6c0) on tqpair=0xd819f0 00:16:51.688 [2024-11-19 01:58:02.254961] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:16:51.688 [2024-11-19 01:58:02.254969] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:16:51.688 [2024-11-19 01:58:02.254977] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:16:51.688 [2024-11-19 01:58:02.254981] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.688 [2024-11-19 01:58:02.254985] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd819f0) 00:16:51.688 [2024-11-19 01:58:02.254993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.688 [2024-11-19 01:58:02.255011] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdba6c0, cid 0, qid 0 00:16:51.688 [2024-11-19 01:58:02.255056] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.688 [2024-11-19 01:58:02.255063] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.688 [2024-11-19 01:58:02.255066] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.688 [2024-11-19 01:58:02.255070] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdba6c0) on tqpair=0xd819f0 00:16:51.688 [2024-11-19 01:58:02.255077] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:51.688 [2024-11-19 01:58:02.255087] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.688 [2024-11-19 01:58:02.255092] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.688 [2024-11-19 01:58:02.255096] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd819f0) 00:16:51.688 [2024-11-19 01:58:02.255103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.688 [2024-11-19 01:58:02.255121] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdba6c0, cid 0, qid 0 00:16:51.688 [2024-11-19 01:58:02.255171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.688 [2024-11-19 01:58:02.255178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.688 [2024-11-19 01:58:02.255182] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.688 [2024-11-19 01:58:02.255186] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdba6c0) on tqpair=0xd819f0 00:16:51.688 [2024-11-19 01:58:02.255191] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:16:51.688 [2024-11-19 01:58:02.255196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:16:51.688 [2024-11-19 01:58:02.255204] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:51.688 [2024-11-19 01:58:02.255315] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:16:51.688 [2024-11-19 01:58:02.255320] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:51.688 [2024-11-19 01:58:02.255330] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.688 [2024-11-19 01:58:02.255334] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.688 [2024-11-19 01:58:02.255338] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0xd819f0) 00:16:51.688 [2024-11-19 01:58:02.255346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.688 [2024-11-19 01:58:02.255366] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdba6c0, cid 0, qid 0 00:16:51.688 [2024-11-19 01:58:02.255414] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.688 [2024-11-19 01:58:02.255421] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.688 [2024-11-19 01:58:02.255425] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.688 [2024-11-19 01:58:02.255429] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdba6c0) on tqpair=0xd819f0 00:16:51.688 [2024-11-19 01:58:02.255434] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:51.688 [2024-11-19 01:58:02.255444] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.688 [2024-11-19 01:58:02.255449] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.688 [2024-11-19 01:58:02.255453] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd819f0) 00:16:51.688 [2024-11-19 01:58:02.255460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.688 [2024-11-19 01:58:02.255479] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdba6c0, cid 0, qid 0 00:16:51.688 [2024-11-19 01:58:02.255526] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.688 [2024-11-19 01:58:02.255533] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.688 [2024-11-19 01:58:02.255537] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.688 [2024-11-19 01:58:02.255541] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdba6c0) on tqpair=0xd819f0 00:16:51.688 [2024-11-19 01:58:02.255566] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:51.688 [2024-11-19 01:58:02.255573] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:16:51.688 [2024-11-19 01:58:02.255583] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:16:51.688 [2024-11-19 01:58:02.255598] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:16:51.688 [2024-11-19 01:58:02.255608] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.688 [2024-11-19 01:58:02.255612] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd819f0) 00:16:51.688 [2024-11-19 01:58:02.255620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.688 [2024-11-19 01:58:02.255641] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdba6c0, cid 0, qid 0 00:16:51.688 [2024-11-19 01:58:02.255740] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:51.688 [2024-11-19 01:58:02.255747] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:51.688 [2024-11-19 01:58:02.255751] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:51.688 [2024-11-19 01:58:02.255755] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd819f0): datao=0, datal=4096, cccid=0 00:16:51.688 [2024-11-19 01:58:02.255760] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdba6c0) on tqpair(0xd819f0): expected_datao=0, payload_size=4096 00:16:51.688 [2024-11-19 01:58:02.255765] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.688 [2024-11-19 01:58:02.255773] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:51.688 [2024-11-19 01:58:02.255778] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:51.688 [2024-11-19 01:58:02.255787] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.688 [2024-11-19 01:58:02.255793] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.688 [2024-11-19 01:58:02.255796] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.688 [2024-11-19 01:58:02.255801] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdba6c0) on tqpair=0xd819f0 00:16:51.688 [2024-11-19 01:58:02.255809] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:16:51.688 [2024-11-19 01:58:02.255815] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:16:51.688 [2024-11-19 01:58:02.255820] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:16:51.688 [2024-11-19 01:58:02.255825] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:16:51.688 [2024-11-19 01:58:02.255830] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:16:51.689 [2024-11-19 01:58:02.255835] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:16:51.689 [2024-11-19 01:58:02.255849] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:16:51.689 [2024-11-19 01:58:02.255857] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.689 [2024-11-19 01:58:02.255861] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.689 [2024-11-19 01:58:02.255865] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd819f0) 00:16:51.689 [2024-11-19 01:58:02.255873] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:51.689 [2024-11-19 01:58:02.255893] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdba6c0, cid 0, qid 0 00:16:51.689 [2024-11-19 01:58:02.255941] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.689 [2024-11-19 01:58:02.255948] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.689 [2024-11-19 01:58:02.255952] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.689 [2024-11-19 01:58:02.255956] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdba6c0) on tqpair=0xd819f0 00:16:51.689 [2024-11-19 
01:58:02.255964] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.689 [2024-11-19 01:58:02.255968] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.689 [2024-11-19 01:58:02.255972] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd819f0) 00:16:51.689 [2024-11-19 01:58:02.255979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.689 [2024-11-19 01:58:02.255985] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.689 [2024-11-19 01:58:02.255990] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.689 [2024-11-19 01:58:02.255993] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xd819f0) 00:16:51.689 [2024-11-19 01:58:02.255999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.689 [2024-11-19 01:58:02.256005] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.689 [2024-11-19 01:58:02.256009] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.689 [2024-11-19 01:58:02.256013] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xd819f0) 00:16:51.689 [2024-11-19 01:58:02.256019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.689 [2024-11-19 01:58:02.256025] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.689 [2024-11-19 01:58:02.256029] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.689 [2024-11-19 01:58:02.256033] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd819f0) 00:16:51.689 [2024-11-19 01:58:02.256039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.689 [2024-11-19 01:58:02.256044] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:51.689 [2024-11-19 01:58:02.256056] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:51.689 [2024-11-19 01:58:02.256064] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.689 [2024-11-19 01:58:02.256068] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd819f0) 00:16:51.689 [2024-11-19 01:58:02.256075] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.689 [2024-11-19 01:58:02.256096] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdba6c0, cid 0, qid 0 00:16:51.689 [2024-11-19 01:58:02.256103] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdba840, cid 1, qid 0 00:16:51.689 [2024-11-19 01:58:02.256109] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdba9c0, cid 2, qid 0 00:16:51.689 [2024-11-19 01:58:02.256114] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbab40, cid 3, qid 0 00:16:51.689 [2024-11-19 01:58:02.256119] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbacc0, cid 4, qid 0 00:16:51.689 
[2024-11-19 01:58:02.256198] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.689 [2024-11-19 01:58:02.256205] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.689 [2024-11-19 01:58:02.256209] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.689 [2024-11-19 01:58:02.256213] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdbacc0) on tqpair=0xd819f0 00:16:51.689 [2024-11-19 01:58:02.256219] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:16:51.689 [2024-11-19 01:58:02.256224] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:51.689 [2024-11-19 01:58:02.256233] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:16:51.689 [2024-11-19 01:58:02.256243] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:51.689 [2024-11-19 01:58:02.256250] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.689 [2024-11-19 01:58:02.256255] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.689 [2024-11-19 01:58:02.256259] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd819f0) 00:16:51.689 [2024-11-19 01:58:02.256266] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:51.689 [2024-11-19 01:58:02.256284] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbacc0, cid 4, qid 0 00:16:51.689 [2024-11-19 01:58:02.256335] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.689 [2024-11-19 01:58:02.256342] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.689 [2024-11-19 01:58:02.256346] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.689 [2024-11-19 01:58:02.256350] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdbacc0) on tqpair=0xd819f0 00:16:51.689 [2024-11-19 01:58:02.256413] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:16:51.689 [2024-11-19 01:58:02.256425] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:51.689 [2024-11-19 01:58:02.256434] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.689 [2024-11-19 01:58:02.256438] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd819f0) 00:16:51.689 [2024-11-19 01:58:02.256445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.689 [2024-11-19 01:58:02.256465] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbacc0, cid 4, qid 0 00:16:51.689 [2024-11-19 01:58:02.256542] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:51.689 [2024-11-19 01:58:02.256551] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:51.689 [2024-11-19 01:58:02.256555] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:51.689 [2024-11-19 01:58:02.256559] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd819f0): datao=0, datal=4096, cccid=4 00:16:51.689 [2024-11-19 01:58:02.256564] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdbacc0) on tqpair(0xd819f0): expected_datao=0, payload_size=4096 00:16:51.689 [2024-11-19 01:58:02.256568] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.689 [2024-11-19 01:58:02.256576] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:51.689 [2024-11-19 01:58:02.256580] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:51.689 [2024-11-19 01:58:02.256589] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.689 [2024-11-19 01:58:02.256595] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.689 [2024-11-19 01:58:02.256598] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.689 [2024-11-19 01:58:02.256602] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdbacc0) on tqpair=0xd819f0 00:16:51.689 [2024-11-19 01:58:02.256617] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:16:51.689 [2024-11-19 01:58:02.256628] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:16:51.689 [2024-11-19 01:58:02.256638] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:16:51.689 [2024-11-19 01:58:02.256647] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.689 [2024-11-19 01:58:02.256651] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd819f0) 00:16:51.689 [2024-11-19 01:58:02.256658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.689 [2024-11-19 01:58:02.256679] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbacc0, cid 4, qid 0 00:16:51.689 [2024-11-19 01:58:02.256804] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:51.689 [2024-11-19 01:58:02.256811] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:51.689 [2024-11-19 01:58:02.256815] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:51.689 [2024-11-19 01:58:02.256819] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd819f0): datao=0, datal=4096, cccid=4 00:16:51.689 [2024-11-19 01:58:02.256823] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdbacc0) on tqpair(0xd819f0): expected_datao=0, payload_size=4096 00:16:51.689 [2024-11-19 01:58:02.256828] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.689 [2024-11-19 01:58:02.256835] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:51.689 [2024-11-19 01:58:02.256839] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:51.689 [2024-11-19 01:58:02.256847] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.689 [2024-11-19 01:58:02.256853] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.689 [2024-11-19 01:58:02.256857] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.690 
[2024-11-19 01:58:02.256861] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdbacc0) on tqpair=0xd819f0 00:16:51.690 [2024-11-19 01:58:02.256878] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:51.690 [2024-11-19 01:58:02.256889] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:51.690 [2024-11-19 01:58:02.256897] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.690 [2024-11-19 01:58:02.256902] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd819f0) 00:16:51.690 [2024-11-19 01:58:02.256909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.690 [2024-11-19 01:58:02.256928] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbacc0, cid 4, qid 0 00:16:51.690 [2024-11-19 01:58:02.256989] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:51.690 [2024-11-19 01:58:02.256995] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:51.690 [2024-11-19 01:58:02.256999] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:51.690 [2024-11-19 01:58:02.257003] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd819f0): datao=0, datal=4096, cccid=4 00:16:51.690 [2024-11-19 01:58:02.257008] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdbacc0) on tqpair(0xd819f0): expected_datao=0, payload_size=4096 00:16:51.690 [2024-11-19 01:58:02.257012] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.690 [2024-11-19 01:58:02.257019] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:51.690 [2024-11-19 01:58:02.257023] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:51.690 [2024-11-19 01:58:02.257032] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.690 [2024-11-19 01:58:02.257038] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.690 [2024-11-19 01:58:02.257042] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.690 [2024-11-19 01:58:02.257046] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdbacc0) on tqpair=0xd819f0 00:16:51.690 [2024-11-19 01:58:02.257054] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:51.690 [2024-11-19 01:58:02.257063] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:16:51.690 [2024-11-19 01:58:02.257074] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:16:51.690 [2024-11-19 01:58:02.257081] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:51.690 [2024-11-19 01:58:02.257087] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:51.690 [2024-11-19 01:58:02.257092] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:16:51.690 [2024-11-19 01:58:02.257098] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:16:51.690 [2024-11-19 01:58:02.257102] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:16:51.690 [2024-11-19 01:58:02.257108] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:16:51.690 [2024-11-19 01:58:02.257124] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.690 [2024-11-19 01:58:02.257128] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd819f0) 00:16:51.690 [2024-11-19 01:58:02.257136] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.690 [2024-11-19 01:58:02.257143] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.690 [2024-11-19 01:58:02.257147] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.690 [2024-11-19 01:58:02.257151] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd819f0) 00:16:51.690 [2024-11-19 01:58:02.257157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.690 [2024-11-19 01:58:02.257181] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbacc0, cid 4, qid 0 00:16:51.690 [2024-11-19 01:58:02.257189] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbae40, cid 5, qid 0 00:16:51.690 [2024-11-19 01:58:02.257254] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.690 [2024-11-19 01:58:02.257261] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.690 [2024-11-19 01:58:02.257265] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.690 [2024-11-19 01:58:02.257269] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdbacc0) on tqpair=0xd819f0 00:16:51.690 [2024-11-19 01:58:02.257276] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.690 [2024-11-19 01:58:02.257282] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.690 [2024-11-19 01:58:02.257285] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.690 [2024-11-19 01:58:02.257289] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdbae40) on tqpair=0xd819f0 00:16:51.690 [2024-11-19 01:58:02.257300] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.690 [2024-11-19 01:58:02.257304] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd819f0) 00:16:51.690 [2024-11-19 01:58:02.257311] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.690 [2024-11-19 01:58:02.257329] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbae40, cid 5, qid 0 00:16:51.690 [2024-11-19 01:58:02.257376] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.690 [2024-11-19 01:58:02.257383] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.690 [2024-11-19 01:58:02.257386] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.690 [2024-11-19 01:58:02.257390] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdbae40) on tqpair=0xd819f0 00:16:51.690 [2024-11-19 01:58:02.257401] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.690 [2024-11-19 01:58:02.257405] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd819f0) 00:16:51.690 [2024-11-19 01:58:02.257412] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.690 [2024-11-19 01:58:02.257429] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbae40, cid 5, qid 0 00:16:51.690 [2024-11-19 01:58:02.257491] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.690 [2024-11-19 01:58:02.257509] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.690 [2024-11-19 01:58:02.257514] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.690 [2024-11-19 01:58:02.257518] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdbae40) on tqpair=0xd819f0 00:16:51.690 [2024-11-19 01:58:02.257530] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.690 [2024-11-19 01:58:02.257534] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd819f0) 00:16:51.690 [2024-11-19 01:58:02.257542] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.690 [2024-11-19 01:58:02.257560] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbae40, cid 5, qid 0 00:16:51.690 [2024-11-19 01:58:02.257610] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.690 [2024-11-19 01:58:02.257617] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.690 [2024-11-19 01:58:02.257621] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.690 [2024-11-19 01:58:02.257625] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdbae40) on tqpair=0xd819f0 00:16:51.690 [2024-11-19 01:58:02.257643] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.690 [2024-11-19 01:58:02.257648] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd819f0) 00:16:51.690 [2024-11-19 01:58:02.257656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.690 [2024-11-19 01:58:02.257664] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.690 [2024-11-19 01:58:02.257668] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd819f0) 00:16:51.690 [2024-11-19 01:58:02.257674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.690 [2024-11-19 01:58:02.257681] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.690 [2024-11-19 01:58:02.257686] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xd819f0) 00:16:51.690 [2024-11-19 01:58:02.257692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 
nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.690 [2024-11-19 01:58:02.257700] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.690 [2024-11-19 01:58:02.257704] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xd819f0) 00:16:51.690 [2024-11-19 01:58:02.257710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.690 [2024-11-19 01:58:02.257730] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbae40, cid 5, qid 0 00:16:51.690 [2024-11-19 01:58:02.257738] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbacc0, cid 4, qid 0 00:16:51.690 [2024-11-19 01:58:02.257743] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbafc0, cid 6, qid 0 00:16:51.690 [2024-11-19 01:58:02.257747] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbb140, cid 7, qid 0 00:16:51.690 [2024-11-19 01:58:02.257905] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:51.690 [2024-11-19 01:58:02.257914] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:51.690 [2024-11-19 01:58:02.257917] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:51.690 [2024-11-19 01:58:02.257921] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd819f0): datao=0, datal=8192, cccid=5 00:16:51.690 [2024-11-19 01:58:02.257926] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdbae40) on tqpair(0xd819f0): expected_datao=0, payload_size=8192 00:16:51.690 [2024-11-19 01:58:02.257931] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.690 [2024-11-19 01:58:02.257948] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:51.690 [2024-11-19 01:58:02.257953] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:51.690 [2024-11-19 01:58:02.257959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:51.690 [2024-11-19 01:58:02.257965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:51.690 [2024-11-19 01:58:02.257969] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:51.691 [2024-11-19 01:58:02.257973] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd819f0): datao=0, datal=512, cccid=4 00:16:51.691 [2024-11-19 01:58:02.257978] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdbacc0) on tqpair(0xd819f0): expected_datao=0, payload_size=512 00:16:51.691 [2024-11-19 01:58:02.257982] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.691 [2024-11-19 01:58:02.257989] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:51.691 [2024-11-19 01:58:02.257993] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:51.691 [2024-11-19 01:58:02.257999] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:51.691 [2024-11-19 01:58:02.258005] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:51.691 [2024-11-19 01:58:02.258008] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:51.691 [2024-11-19 01:58:02.258012] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd819f0): datao=0, datal=512, cccid=6 00:16:51.691 [2024-11-19 01:58:02.258016] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdbafc0) on tqpair(0xd819f0): expected_datao=0, payload_size=512 00:16:51.691 [2024-11-19 01:58:02.258021] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.691 [2024-11-19 01:58:02.258027] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:51.691 [2024-11-19 01:58:02.258031] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:51.691 [2024-11-19 01:58:02.258037] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:51.691 [2024-11-19 01:58:02.258043] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:51.691 [2024-11-19 01:58:02.258046] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:51.691 [2024-11-19 01:58:02.258050] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd819f0): datao=0, datal=4096, cccid=7 00:16:51.691 [2024-11-19 01:58:02.258055] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xdbb140) on tqpair(0xd819f0): expected_datao=0, payload_size=4096 00:16:51.691 [2024-11-19 01:58:02.258059] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.691 [2024-11-19 01:58:02.258066] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:51.691 [2024-11-19 01:58:02.258070] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:51.691 [2024-11-19 01:58:02.258079] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.691 [2024-11-19 01:58:02.258085] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.691 [2024-11-19 01:58:02.258089] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.691 [2024-11-19 01:58:02.258093] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdbae40) on tqpair=0xd819f0 00:16:51.691 ===================================================== 00:16:51.691 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:51.691 ===================================================== 00:16:51.691 Controller Capabilities/Features 00:16:51.691 ================================ 00:16:51.691 Vendor ID: 8086 00:16:51.691 Subsystem Vendor ID: 8086 00:16:51.691 Serial Number: SPDK00000000000001 00:16:51.691 Model Number: SPDK bdev Controller 00:16:51.691 Firmware Version: 25.01 00:16:51.691 Recommended Arb Burst: 6 00:16:51.691 IEEE OUI Identifier: e4 d2 5c 00:16:51.691 Multi-path I/O 00:16:51.691 May have multiple subsystem ports: Yes 00:16:51.691 May have multiple controllers: Yes 00:16:51.691 Associated with SR-IOV VF: No 00:16:51.691 Max Data Transfer Size: 131072 00:16:51.691 Max Number of Namespaces: 32 00:16:51.691 Max Number of I/O Queues: 127 00:16:51.691 NVMe Specification Version (VS): 1.3 00:16:51.691 NVMe Specification Version (Identify): 1.3 00:16:51.691 Maximum Queue Entries: 128 00:16:51.691 Contiguous Queues Required: Yes 00:16:51.691 Arbitration Mechanisms Supported 00:16:51.691 Weighted Round Robin: Not Supported 00:16:51.691 Vendor Specific: Not Supported 00:16:51.691 Reset Timeout: 15000 ms 00:16:51.691 Doorbell Stride: 4 bytes 00:16:51.691 NVM Subsystem Reset: Not Supported 00:16:51.691 Command Sets Supported 00:16:51.691 NVM Command Set: Supported 00:16:51.691 Boot Partition: Not Supported 00:16:51.691 Memory Page Size Minimum: 4096 bytes 00:16:51.691 Memory Page Size Maximum: 4096 bytes 00:16:51.691 Persistent Memory Region: Not Supported 00:16:51.691 Optional Asynchronous Events Supported 00:16:51.691 
Namespace Attribute Notices: Supported 00:16:51.691 Firmware Activation Notices: Not Supported 00:16:51.691 ANA Change Notices: Not Supported 00:16:51.691 PLE Aggregate Log Change Notices: Not Supported 00:16:51.691 LBA Status Info Alert Notices: Not Supported 00:16:51.691 EGE Aggregate Log Change Notices: Not Supported 00:16:51.691 Normal NVM Subsystem Shutdown event: Not Supported 00:16:51.691 Zone Descriptor Change Notices: Not Supported 00:16:51.691 Discovery Log Change Notices: Not Supported 00:16:51.691 Controller Attributes 00:16:51.691 128-bit Host Identifier: Supported 00:16:51.691 Non-Operational Permissive Mode: Not Supported 00:16:51.691 NVM Sets: Not Supported 00:16:51.691 Read Recovery Levels: Not Supported 00:16:51.691 Endurance Groups: Not Supported 00:16:51.691 Predictable Latency Mode: Not Supported 00:16:51.691 Traffic Based Keep ALive: Not Supported 00:16:51.691 Namespace Granularity: Not Supported 00:16:51.691 SQ Associations: Not Supported 00:16:51.691 UUID List: Not Supported 00:16:51.691 Multi-Domain Subsystem: Not Supported 00:16:51.691 Fixed Capacity Management: Not Supported 00:16:51.691 Variable Capacity Management: Not Supported 00:16:51.691 Delete Endurance Group: Not Supported 00:16:51.691 Delete NVM Set: Not Supported 00:16:51.691 Extended LBA Formats Supported: Not Supported 00:16:51.691 Flexible Data Placement Supported: Not Supported 00:16:51.691 00:16:51.691 Controller Memory Buffer Support 00:16:51.691 ================================ 00:16:51.691 Supported: No 00:16:51.691 00:16:51.691 Persistent Memory Region Support 00:16:51.691 ================================ 00:16:51.691 Supported: No 00:16:51.691 00:16:51.691 Admin Command Set Attributes 00:16:51.691 ============================ 00:16:51.691 Security Send/Receive: Not Supported 00:16:51.691 Format NVM: Not Supported 00:16:51.691 Firmware Activate/Download: Not Supported 00:16:51.691 Namespace Management: Not Supported 00:16:51.691 Device Self-Test: Not Supported 00:16:51.691 Directives: Not Supported 00:16:51.691 NVMe-MI: Not Supported 00:16:51.691 Virtualization Management: Not Supported 00:16:51.691 Doorbell Buffer Config: Not Supported 00:16:51.691 Get LBA Status Capability: Not Supported 00:16:51.691 Command & Feature Lockdown Capability: Not Supported 00:16:51.691 Abort Command Limit: 4 00:16:51.691 Async Event Request Limit: 4 00:16:51.691 Number of Firmware Slots: N/A 00:16:51.691 Firmware Slot 1 Read-Only: N/A 00:16:51.691 [2024-11-19 01:58:02.258108] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.691 [2024-11-19 01:58:02.258115] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.691 [2024-11-19 01:58:02.258119] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.691 [2024-11-19 01:58:02.258123] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdbacc0) on tqpair=0xd819f0 00:16:51.691 [2024-11-19 01:58:02.258135] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.691 [2024-11-19 01:58:02.258141] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.691 [2024-11-19 01:58:02.258145] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.691 [2024-11-19 01:58:02.258149] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdbafc0) on tqpair=0xd819f0 00:16:51.691 [2024-11-19 01:58:02.258156] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.691 [2024-11-19 01:58:02.258162] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.691 [2024-11-19 01:58:02.258166] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.691 [2024-11-19 01:58:02.258170] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdbb140) on tqpair=0xd819f0 Firmware Activation Without Reset: N/A 00:16:51.691 Multiple Update Detection Support: N/A 00:16:51.691 Firmware Update Granularity: No Information Provided 00:16:51.691 Per-Namespace SMART Log: No 00:16:51.691 Asymmetric Namespace Access Log Page: Not Supported 00:16:51.691 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:16:51.691 Command Effects Log Page: Supported 00:16:51.691 Get Log Page Extended Data: Supported 00:16:51.691 Telemetry Log Pages: Not Supported 00:16:51.691 Persistent Event Log Pages: Not Supported 00:16:51.691 Supported Log Pages Log Page: May Support 00:16:51.691 Commands Supported & Effects Log Page: Not Supported 00:16:51.691 Feature Identifiers & Effects Log Page:May Support 00:16:51.691 NVMe-MI Commands & Effects Log Page: May Support 00:16:51.691 Data Area 4 for Telemetry Log: Not Supported 00:16:51.691 Error Log Page Entries Supported: 128 00:16:51.691 Keep Alive: Supported 00:16:51.691 Keep Alive Granularity: 10000 ms 00:16:51.691 00:16:51.691 NVM Command Set Attributes 00:16:51.691 ========================== 00:16:51.691 Submission Queue Entry Size 00:16:51.691 Max: 64 00:16:51.691 Min: 64 00:16:51.691 Completion Queue Entry Size 00:16:51.691 Max: 16 00:16:51.691 Min: 16 00:16:51.691 Number of Namespaces: 32 00:16:51.691 Compare Command: Supported 00:16:51.691 Write Uncorrectable Command: Not Supported 00:16:51.691 Dataset Management Command: Supported 00:16:51.691 Write Zeroes Command: Supported 00:16:51.691 Set Features Save Field: Not Supported 00:16:51.691 Reservations: Supported 00:16:51.691 Timestamp: Not Supported 00:16:51.691 Copy: Supported 00:16:51.692 Volatile Write Cache: Present 00:16:51.692 Atomic Write Unit (Normal): 1 00:16:51.692 Atomic Write Unit (PFail): 1 00:16:51.692 Atomic Compare & Write Unit: 1 00:16:51.692 Fused Compare & Write: Supported 00:16:51.692 Scatter-Gather List 00:16:51.692 SGL Command Set: Supported 00:16:51.692 SGL Keyed: Supported 00:16:51.692 SGL Bit Bucket Descriptor: Not Supported 00:16:51.692 SGL Metadata Pointer: Not Supported 00:16:51.692 Oversized SGL: Not Supported 00:16:51.692 SGL Metadata Address: Not Supported 00:16:51.692 SGL Offset: Supported 00:16:51.692 Transport SGL Data Block: Not Supported 00:16:51.692 Replay Protected Memory Block: Not Supported 00:16:51.692 00:16:51.692 Firmware Slot Information 00:16:51.692 ========================= 00:16:51.692 Active slot: 1 00:16:51.692 Slot 1 Firmware Revision: 25.01 00:16:51.692 00:16:51.692 00:16:51.692 Commands Supported and Effects 00:16:51.692 ============================== 00:16:51.692 Admin Commands 00:16:51.692 -------------- 00:16:51.692 Get Log Page (02h): Supported 00:16:51.692 Identify (06h): Supported 00:16:51.692 Abort (08h): Supported 00:16:51.692 Set Features (09h): Supported 00:16:51.692 Get Features (0Ah): Supported 00:16:51.692 Asynchronous Event Request (0Ch): Supported 00:16:51.692 Keep Alive (18h): Supported 00:16:51.692 I/O Commands 00:16:51.692 ------------ 00:16:51.692 Flush (00h): Supported LBA-Change 00:16:51.692 Write (01h): Supported LBA-Change 00:16:51.692 Read (02h): Supported 00:16:51.692 Compare (05h): Supported 00:16:51.692 Write Zeroes (08h): Supported LBA-Change 00:16:51.692 Dataset Management (09h): Supported LBA-Change 00:16:51.692 Copy (19h): 
Supported LBA-Change 00:16:51.692 00:16:51.692 Error Log 00:16:51.692 ========= 00:16:51.692 00:16:51.692 Arbitration 00:16:51.692 =========== 00:16:51.692 Arbitration Burst: 1 00:16:51.692 00:16:51.692 Power Management 00:16:51.692 ================ 00:16:51.692 Number of Power States: 1 00:16:51.692 Current Power State: Power State #0 00:16:51.692 Power State #0: 00:16:51.692 Max Power: 0.00 W 00:16:51.692 Non-Operational State: Operational 00:16:51.692 Entry Latency: Not Reported 00:16:51.692 Exit Latency: Not Reported 00:16:51.692 Relative Read Throughput: 0 00:16:51.692 Relative Read Latency: 0 00:16:51.692 Relative Write Throughput: 0 00:16:51.692 Relative Write Latency: 0 00:16:51.692 Idle Power: Not Reported 00:16:51.692 Active Power: Not Reported 00:16:51.692 Non-Operational Permissive Mode: Not Supported 00:16:51.692 00:16:51.692 Health Information 00:16:51.692 ================== 00:16:51.692 Critical Warnings: 00:16:51.692 Available Spare Space: OK 00:16:51.692 Temperature: OK 00:16:51.692 Device Reliability: OK 00:16:51.692 Read Only: No 00:16:51.692 Volatile Memory Backup: OK 00:16:51.692 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:51.692 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:51.692 Available Spare: 0% 00:16:51.692 Available Spare Threshold: 0% 00:16:51.692 Life Percentage Used:[2024-11-19 01:58:02.258293] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.692 [2024-11-19 01:58:02.258300] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xd819f0) 00:16:51.692 [2024-11-19 01:58:02.258308] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.692 [2024-11-19 01:58:02.258330] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbb140, cid 7, qid 0 00:16:51.692 [2024-11-19 01:58:02.258377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.692 [2024-11-19 01:58:02.258384] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.692 [2024-11-19 01:58:02.258388] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.692 [2024-11-19 01:58:02.258392] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdbb140) on tqpair=0xd819f0 00:16:51.692 [2024-11-19 01:58:02.258428] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:16:51.692 [2024-11-19 01:58:02.258439] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdba6c0) on tqpair=0xd819f0 00:16:51.692 [2024-11-19 01:58:02.258446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.692 [2024-11-19 01:58:02.258452] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdba840) on tqpair=0xd819f0 00:16:51.692 [2024-11-19 01:58:02.258457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.692 [2024-11-19 01:58:02.258462] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdba9c0) on tqpair=0xd819f0 00:16:51.692 [2024-11-19 01:58:02.258467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.692 [2024-11-19 01:58:02.258472] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdbab40) on tqpair=0xd819f0 
00:16:51.692 [2024-11-19 01:58:02.258477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.692 [2024-11-19 01:58:02.258486] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.692 [2024-11-19 01:58:02.258490] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.692 [2024-11-19 01:58:02.258494] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd819f0) 00:16:51.692 [2024-11-19 01:58:02.258501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.692 [2024-11-19 01:58:02.258524] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbab40, cid 3, qid 0 00:16:51.692 [2024-11-19 01:58:02.262597] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.692 [2024-11-19 01:58:02.262608] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.692 [2024-11-19 01:58:02.262612] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.692 [2024-11-19 01:58:02.262616] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdbab40) on tqpair=0xd819f0 00:16:51.692 [2024-11-19 01:58:02.262626] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.692 [2024-11-19 01:58:02.262631] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.692 [2024-11-19 01:58:02.262635] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd819f0) 00:16:51.692 [2024-11-19 01:58:02.262644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.692 [2024-11-19 01:58:02.262672] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbab40, cid 3, qid 0 00:16:51.692 [2024-11-19 01:58:02.262751] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.692 [2024-11-19 01:58:02.262758] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:51.692 [2024-11-19 01:58:02.262761] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.692 [2024-11-19 01:58:02.262766] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdbab40) on tqpair=0xd819f0 00:16:51.692 [2024-11-19 01:58:02.262771] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:16:51.692 [2024-11-19 01:58:02.262776] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:16:51.692 [2024-11-19 01:58:02.262786] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:51.692 [2024-11-19 01:58:02.262791] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:51.692 [2024-11-19 01:58:02.262795] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd819f0) 00:16:51.692 [2024-11-19 01:58:02.262802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.692 [2024-11-19 01:58:02.262821] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xdbab40, cid 3, qid 0 00:16:51.692 [2024-11-19 01:58:02.262865] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:51.692 [2024-11-19 01:58:02.262872] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:16:51.692 [2024-11-19 01:58:02.262876] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:51.692 [2024-11-19 01:58:02.262880] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdbab40) on tqpair=0xd819f0 00:16:51.692 [... duplicate shutdown-poll iterations elided: the identical DEBUG cycle of nvme_tcp_pdu_ch_handle (pdu type = 5), nvme_tcp_pdu_psh_handle, nvme_tcp_capsule_resp_hdr_handle, nvme_tcp_req_complete(0xdbab40), nvme_tcp_build_contig_request, nvme_tcp_qpair_capsule_cmd_send (capsule_cmd cid=3 on tqpair 0xd819f0) and the resulting FABRIC PROPERTY GET qid:0 cid:3 NOTICE repeats, differing only in timestamps, from 01:58:02.262891 through 01:58:02.270690 while the host polls the controller for shutdown completion ...] 00:16:51.695 [2024-11-19 01:58:02.270739] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 [2024-11-19 01:58:02.270746] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 [2024-11-19 01:58:02.270749] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:16:51.696 [2024-11-19 01:58:02.270753] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xdbab40) on tqpair=0xd819f0 00:16:51.696 [2024-11-19 01:58:02.270761] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:16:51.696 0% 00:16:51.696 Data Units Read: 0 00:16:51.696 Data Units Written: 0 00:16:51.696 Host Read Commands: 0 00:16:51.696 Host Write Commands: 0 00:16:51.696 Controller Busy Time: 0 minutes 00:16:51.696 Power Cycles: 0 00:16:51.696 Power On Hours: 0 hours 00:16:51.696 Unsafe Shutdowns: 0 00:16:51.696 Unrecoverable Media Errors: 0 00:16:51.696 Lifetime Error Log Entries: 0 00:16:51.696 Warning Temperature Time: 0 minutes 00:16:51.696 Critical Temperature Time: 0 minutes 00:16:51.696 00:16:51.696 Number of Queues 00:16:51.696 ================ 00:16:51.696 Number of I/O Submission Queues: 127 00:16:51.696 Number of I/O Completion Queues: 127 00:16:51.696 00:16:51.696 Active Namespaces 00:16:51.696 ================= 00:16:51.696 Namespace ID:1 00:16:51.696 Error Recovery Timeout: Unlimited 00:16:51.696 Command Set Identifier: NVM (00h) 00:16:51.696 Deallocate: Supported 00:16:51.696 Deallocated/Unwritten Error: Not Supported 00:16:51.696 Deallocated Read Value: Unknown 00:16:51.696 Deallocate in Write Zeroes: Not Supported 00:16:51.696 Deallocated Guard Field: 0xFFFF 00:16:51.696 Flush: Supported 00:16:51.696 Reservation: Supported 00:16:51.696 Namespace Sharing Capabilities: Multiple Controllers 00:16:51.696 Size (in LBAs): 131072 (0GiB) 00:16:51.696 Capacity (in LBAs): 131072 (0GiB) 00:16:51.696 Utilization (in LBAs): 131072 (0GiB) 00:16:51.696 NGUID: ABCDEF0123456789ABCDEF0123456789 00:16:51.696 EUI64: ABCDEF0123456789 00:16:51.696 UUID: ce96616e-3667-46ce-a5e0-efb682d309ba 00:16:51.696 Thin Provisioning: Not Supported 00:16:51.696 Per-NS Atomic Units: Yes 00:16:51.696 Atomic Boundary Size (Normal): 0 00:16:51.696 Atomic Boundary Size (PFail): 0 00:16:51.696 Atomic Boundary Offset: 0 00:16:51.696 Maximum Single Source Range Length: 65535 00:16:51.696 Maximum Copy Length: 65535 00:16:51.696 Maximum Source Range Count: 1 00:16:51.696 NGUID/EUI64 Never Reused: No 00:16:51.696 Namespace Write Protected: No 00:16:51.696 Number of LBA Formats: 1 00:16:51.696 Current LBA Format: LBA Format #00 00:16:51.696 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:51.696 00:16:51.696 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:16:51.955 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:51.955 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.955 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:51.955 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.955 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:16:51.955 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:16:51.955 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:51.955 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:16:51.955 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:51.956 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:16:51.956 01:58:02 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:51.956 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:51.956 rmmod nvme_tcp 00:16:51.956 rmmod nvme_fabrics 00:16:51.956 rmmod nvme_keyring 00:16:51.956 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:51.956 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:16:51.956 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:16:51.956 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 88122 ']' 00:16:51.956 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 88122 00:16:51.956 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 88122 ']' 00:16:51.956 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 88122 00:16:51.956 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:16:51.956 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:51.956 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88122 00:16:51.956 killing process with pid 88122 00:16:51.956 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:51.956 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:51.956 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88122' 00:16:51.956 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 88122 00:16:51.956 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 88122 00:16:52.214 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:52.214 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:52.214 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:52.214 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:16:52.214 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:16:52.214 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:16:52.215 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:52.215 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:52.215 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:52.215 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:52.215 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:52.215 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:52.215 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:52.215 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:52.215 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 
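The interface teardown being traced here is nvmf/common.sh's nvmf_veth_fini, and it continues on the lines below; collected in one place as a standalone sketch (every interface, bridge, and namespace name is copied from the commands traced in this log, none are invented):

  # detach both veth pairs from the test bridge, then bring them down
  ip link set nvmf_init_br nomaster
  ip link set nvmf_init_br2 nomaster
  ip link set nvmf_tgt_br nomaster
  ip link set nvmf_tgt_br2 nomaster
  ip link set nvmf_init_br down
  ip link set nvmf_init_br2 down
  ip link set nvmf_tgt_br down
  ip link set nvmf_tgt_br2 down
  # delete the bridge and the initiator-side veth endpoints
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  # the target-side endpoints live inside the nvmf_tgt_ns_spdk network namespace
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2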
00:16:52.215 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:52.215 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:52.215 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:52.215 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:52.215 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:52.215 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:52.215 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:52.215 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:52.215 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:52.215 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:52.215 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.475 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:16:52.475 ************************************ 00:16:52.475 END TEST nvmf_identify 00:16:52.475 ************************************ 00:16:52.475 00:16:52.475 real 0m2.086s 00:16:52.475 user 0m4.194s 00:16:52.475 sys 0m0.668s 00:16:52.475 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:52.475 01:58:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:52.475 01:58:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:52.475 01:58:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:52.475 01:58:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:52.475 01:58:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.475 ************************************ 00:16:52.475 START TEST nvmf_perf 00:16:52.475 ************************************ 00:16:52.475 01:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:52.475 * Looking for test storage... 
00:16:52.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:52.475 01:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:52.475 01:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:16:52.475 01:58:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:52.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.475 --rc genhtml_branch_coverage=1 00:16:52.475 --rc genhtml_function_coverage=1 00:16:52.475 --rc genhtml_legend=1 00:16:52.475 --rc geninfo_all_blocks=1 00:16:52.475 --rc geninfo_unexecuted_blocks=1 00:16:52.475 00:16:52.475 ' 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:52.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.475 --rc genhtml_branch_coverage=1 00:16:52.475 --rc genhtml_function_coverage=1 00:16:52.475 --rc genhtml_legend=1 00:16:52.475 --rc geninfo_all_blocks=1 00:16:52.475 --rc geninfo_unexecuted_blocks=1 00:16:52.475 00:16:52.475 ' 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:52.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.475 --rc genhtml_branch_coverage=1 00:16:52.475 --rc genhtml_function_coverage=1 00:16:52.475 --rc genhtml_legend=1 00:16:52.475 --rc geninfo_all_blocks=1 00:16:52.475 --rc geninfo_unexecuted_blocks=1 00:16:52.475 00:16:52.475 ' 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:52.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:52.475 --rc genhtml_branch_coverage=1 00:16:52.475 --rc genhtml_function_coverage=1 00:16:52.475 --rc genhtml_legend=1 00:16:52.475 --rc geninfo_all_blocks=1 00:16:52.475 --rc geninfo_unexecuted_blocks=1 00:16:52.475 00:16:52.475 ' 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.475 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:52.476 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:52.476 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:52.735 Cannot find device "nvmf_init_br" 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:52.735 Cannot find device "nvmf_init_br2" 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:52.735 Cannot find device "nvmf_tgt_br" 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:52.735 Cannot find device "nvmf_tgt_br2" 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:52.735 Cannot find device "nvmf_init_br" 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:52.735 Cannot find device "nvmf_init_br2" 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:52.735 Cannot find device "nvmf_tgt_br" 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:52.735 Cannot find device "nvmf_tgt_br2" 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:52.735 Cannot find device "nvmf_br" 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:52.735 Cannot find device "nvmf_init_if" 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:52.735 Cannot find device "nvmf_init_if2" 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:52.735 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:52.735 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:52.735 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:52.735 01:58:03 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:52.994 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:52.994 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:16:52.994 00:16:52.994 --- 10.0.0.3 ping statistics --- 00:16:52.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.994 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:52.994 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:16:52.994 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.078 ms 00:16:52.994 00:16:52.994 --- 10.0.0.4 ping statistics --- 00:16:52.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.994 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:52.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:52.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:16:52.994 00:16:52.994 --- 10.0.0.1 ping statistics --- 00:16:52.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.994 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:52.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:52.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:16:52.994 00:16:52.994 --- 10.0.0.2 ping statistics --- 00:16:52.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.994 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=88377 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 88377 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 88377 ']' 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:52.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
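The four pings above close the loop on the veth topology that nvmf_veth_init assembled: the host-side initiator addresses (10.0.0.1, 10.0.0.2) and the namespaced target addresses (10.0.0.3, 10.0.0.4) are all reachable across the nvmf_br bridge. A condensed replay of the ip commands traced earlier, showing one veth pair per side (the test wires two of each, and the iptables ACCEPT rules for port 4420 that follow are omitted here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ping -c 1 10.0.0.3    # host -> namespaced target, as verified above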
00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:52.994 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:52.994 [2024-11-19 01:58:03.569413] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:16:52.994 [2024-11-19 01:58:03.569688] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.253 [2024-11-19 01:58:03.722449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:53.253 [2024-11-19 01:58:03.746311] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:53.253 [2024-11-19 01:58:03.746660] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:53.253 [2024-11-19 01:58:03.746835] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:53.253 [2024-11-19 01:58:03.746986] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:53.253 [2024-11-19 01:58:03.747028] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:53.253 [2024-11-19 01:58:03.748005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.253 [2024-11-19 01:58:03.748151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:53.253 [2024-11-19 01:58:03.748711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:53.253 [2024-11-19 01:58:03.748718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.253 [2024-11-19 01:58:03.781521] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:53.253 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:53.253 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:16:53.253 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:53.253 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:53.253 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:53.253 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:53.253 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:53.253 01:58:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:16:53.820 01:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:16:53.820 01:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:16:54.078 01:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:16:54.078 01:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:54.336 01:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:16:54.336 01:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:16:54.336 01:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:16:54.336 01:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:16:54.336 01:58:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:54.595 [2024-11-19 01:58:05.162361] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:54.595 01:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:54.853 01:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:54.853 01:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:55.112 01:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:55.112 01:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:16:55.412 01:58:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:55.688 [2024-11-19 01:58:06.255757] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:55.688 01:58:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:16:55.947 01:58:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:16:55.947 01:58:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:55.947 01:58:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:16:55.947 01:58:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:57.323 Initializing NVMe Controllers 00:16:57.323 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:57.323 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:16:57.323 Initialization complete. Launching workers. 00:16:57.323 ======================================================== 00:16:57.323 Latency(us) 00:16:57.323 Device Information : IOPS MiB/s Average min max 00:16:57.323 PCIE (0000:00:10.0) NSID 1 from core 0: 22752.00 88.88 1405.94 407.52 8787.15 00:16:57.323 ======================================================== 00:16:57.323 Total : 22752.00 88.88 1405.94 407.52 8787.15 00:16:57.323 00:16:57.323 01:58:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:58.695 Initializing NVMe Controllers 00:16:58.695 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:58.695 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:58.695 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:58.695 Initialization complete. Launching workers. 
00:16:58.695 ======================================================== 00:16:58.695 Latency(us) 00:16:58.695 Device Information : IOPS MiB/s Average min max 00:16:58.695 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4004.04 15.64 249.42 94.91 7091.28 00:16:58.695 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.50 0.49 8095.34 7002.90 12054.38 00:16:58.695 ======================================================== 00:16:58.695 Total : 4128.54 16.13 486.03 94.91 12054.38 00:16:58.695 00:16:58.695 01:58:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:00.071 Initializing NVMe Controllers 00:17:00.071 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:00.071 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:00.071 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:00.071 Initialization complete. Launching workers. 00:17:00.071 ======================================================== 00:17:00.071 Latency(us) 00:17:00.071 Device Information : IOPS MiB/s Average min max 00:17:00.071 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8986.96 35.11 3563.23 564.82 7249.64 00:17:00.071 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4016.41 15.69 8000.90 6093.18 11773.08 00:17:00.071 ======================================================== 00:17:00.071 Total : 13003.36 50.79 4933.91 564.82 11773.08 00:17:00.071 00:17:00.071 01:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:17:00.071 01:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:02.605 Initializing NVMe Controllers 00:17:02.605 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:02.605 Controller IO queue size 128, less than required. 00:17:02.606 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:02.606 Controller IO queue size 128, less than required. 00:17:02.606 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:02.606 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:02.606 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:02.606 Initialization complete. Launching workers. 
00:17:02.606 ======================================================== 00:17:02.606 Latency(us) 00:17:02.606 Device Information : IOPS MiB/s Average min max 00:17:02.606 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1970.46 492.61 65841.41 35134.01 90178.88 00:17:02.606 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 661.99 165.50 196927.31 54610.93 318723.99 00:17:02.606 ======================================================== 00:17:02.606 Total : 2632.44 658.11 98805.84 35134.01 318723.99 00:17:02.606 00:17:02.606 01:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:17:02.606 Initializing NVMe Controllers 00:17:02.606 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:02.606 Controller IO queue size 128, less than required. 00:17:02.606 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:02.606 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:17:02.606 Controller IO queue size 128, less than required. 00:17:02.606 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:02.606 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:17:02.606 WARNING: Some requested NVMe devices were skipped 00:17:02.606 No valid NVMe controllers or AIO or URING devices found 00:17:02.606 01:58:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:17:05.141 Initializing NVMe Controllers 00:17:05.141 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:05.141 Controller IO queue size 128, less than required. 00:17:05.141 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:05.141 Controller IO queue size 128, less than required. 00:17:05.141 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:05.141 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:05.141 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:05.141 Initialization complete. Launching workers. 
00:17:05.141
00:17:05.141 ====================
00:17:05.141 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:17:05.141 TCP transport:
00:17:05.141 polls: 13069
00:17:05.141 idle_polls: 9446
00:17:05.141 sock_completions: 3623
00:17:05.141 nvme_completions: 6679
00:17:05.141 submitted_requests: 10004
00:17:05.141 queued_requests: 1
00:17:05.141
00:17:05.141 ====================
00:17:05.141 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:17:05.141 TCP transport:
00:17:05.141 polls: 12073
00:17:05.141 idle_polls: 7364
00:17:05.141 sock_completions: 4709
00:17:05.141 nvme_completions: 6907
00:17:05.141 submitted_requests: 10282
00:17:05.141 queued_requests: 1
00:17:05.141 ========================================================
00:17:05.141 Latency(us)
00:17:05.141 Device Information : IOPS MiB/s Average min max
00:17:05.141 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1669.48 417.37 77899.45 40213.26 128798.20
00:17:05.141 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1726.47 431.62 75153.09 31479.69 107890.33
00:17:05.141 ========================================================
00:17:05.141 Total : 3395.95 848.99 76503.22 31479.69 128798.20
00:17:05.141
00:17:05.141 01:58:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:17:05.400 01:58:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:05.659 01:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']'
00:17:05.659 01:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']'
00:17:05.659 01:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0
00:17:05.918 01:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=0d2c4bc0-b14f-413f-82ba-78b8055b20d6
00:17:05.918 01:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 0d2c4bc0-b14f-413f-82ba-78b8055b20d6
00:17:05.918 01:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=0d2c4bc0-b14f-413f-82ba-78b8055b20d6
00:17:05.918 01:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info
00:17:05.918 01:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc
00:17:05.918 01:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs
00:17:05.918 01:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:17:06.177 01:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[
00:17:06.177 {
00:17:06.177 "uuid": "0d2c4bc0-b14f-413f-82ba-78b8055b20d6",
00:17:06.177 "name": "lvs_0",
00:17:06.177 "base_bdev": "Nvme0n1",
00:17:06.177 "total_data_clusters": 1278,
00:17:06.177 "free_clusters": 1278,
00:17:06.177 "block_size": 4096,
00:17:06.177 "cluster_size": 4194304
00:17:06.177 }
00:17:06.177 ]'
00:17:06.177 01:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="0d2c4bc0-b14f-413f-82ba-78b8055b20d6") .free_clusters'
00:17:06.177 01:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1278
00:17:06.177 01:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] |
select(.uuid=="0d2c4bc0-b14f-413f-82ba-78b8055b20d6") .cluster_size' 00:17:06.177 5112 00:17:06.177 01:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:17:06.177 01:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5112 00:17:06.177 01:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5112 00:17:06.177 01:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:17:06.177 01:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0d2c4bc0-b14f-413f-82ba-78b8055b20d6 lbd_0 5112 00:17:06.436 01:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=7d8475b6-f7d2-4da3-8640-4083c00cc843 00:17:06.436 01:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 7d8475b6-f7d2-4da3-8640-4083c00cc843 lvs_n_0 00:17:07.004 01:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=883b8d7c-f25e-4dde-aa42-048dfbb602c5 00:17:07.004 01:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 883b8d7c-f25e-4dde-aa42-048dfbb602c5 00:17:07.004 01:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=883b8d7c-f25e-4dde-aa42-048dfbb602c5 00:17:07.004 01:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:17:07.004 01:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:17:07.004 01:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:17:07.004 01:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:07.263 01:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:17:07.263 { 00:17:07.263 "uuid": "0d2c4bc0-b14f-413f-82ba-78b8055b20d6", 00:17:07.263 "name": "lvs_0", 00:17:07.263 "base_bdev": "Nvme0n1", 00:17:07.263 "total_data_clusters": 1278, 00:17:07.263 "free_clusters": 0, 00:17:07.263 "block_size": 4096, 00:17:07.263 "cluster_size": 4194304 00:17:07.263 }, 00:17:07.263 { 00:17:07.263 "uuid": "883b8d7c-f25e-4dde-aa42-048dfbb602c5", 00:17:07.263 "name": "lvs_n_0", 00:17:07.263 "base_bdev": "7d8475b6-f7d2-4da3-8640-4083c00cc843", 00:17:07.263 "total_data_clusters": 1276, 00:17:07.263 "free_clusters": 1276, 00:17:07.263 "block_size": 4096, 00:17:07.263 "cluster_size": 4194304 00:17:07.263 } 00:17:07.263 ]' 00:17:07.263 01:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="883b8d7c-f25e-4dde-aa42-048dfbb602c5") .free_clusters' 00:17:07.263 01:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1276 00:17:07.263 01:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="883b8d7c-f25e-4dde-aa42-048dfbb602c5") .cluster_size' 00:17:07.263 5104 00:17:07.263 01:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:17:07.263 01:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5104 00:17:07.263 01:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5104 00:17:07.263 01:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:17:07.263 01:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 883b8d7c-f25e-4dde-aa42-048dfbb602c5 lbd_nest_0 5104 00:17:07.522 01:58:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=4637dedd-ef9f-47d3-88a6-0196d0a7e425 00:17:07.522 01:58:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:07.780 01:58:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:17:07.780 01:58:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 4637dedd-ef9f-47d3-88a6-0196d0a7e425 00:17:08.039 01:58:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:08.298 01:58:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:17:08.298 01:58:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:17:08.298 01:58:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:17:08.298 01:58:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:17:08.298 01:58:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:08.556 Initializing NVMe Controllers 00:17:08.556 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:08.556 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:17:08.557 WARNING: Some requested NVMe devices were skipped 00:17:08.557 No valid NVMe controllers or AIO or URING devices found 00:17:08.557 01:58:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:17:08.557 01:58:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:20.763 Initializing NVMe Controllers 00:17:20.763 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:20.763 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:20.763 Initialization complete. Launching workers. 
00:17:20.763 ======================================================== 00:17:20.763 Latency(us) 00:17:20.763 Device Information : IOPS MiB/s Average min max 00:17:20.763 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 975.50 121.94 1024.73 319.04 8555.28 00:17:20.763 ======================================================== 00:17:20.763 Total : 975.50 121.94 1024.73 319.04 8555.28 00:17:20.763 00:17:20.763 01:58:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:17:20.763 01:58:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:17:20.763 01:58:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:20.763 Initializing NVMe Controllers 00:17:20.763 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:20.763 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:17:20.763 WARNING: Some requested NVMe devices were skipped 00:17:20.763 No valid NVMe controllers or AIO or URING devices found 00:17:20.763 01:58:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:17:20.763 01:58:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:30.743 Initializing NVMe Controllers 00:17:30.743 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:30.743 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:30.743 Initialization complete. Launching workers. 
00:17:30.743 ========================================================
00:17:30.743 Latency(us)
00:17:30.743 Device Information : IOPS MiB/s Average min max
00:17:30.743 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1321.09 165.14 24235.36 3658.13 71476.69
00:17:30.743 ========================================================
00:17:30.743 Total : 1321.09 165.14 24235.36 3658.13 71476.69
00:17:30.743
00:17:30.743 01:58:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:17:30.743 01:58:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:17:30.743 01:58:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:17:30.743 Initializing NVMe Controllers
00:17:30.743 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:17:30.743 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512
00:17:30.743 WARNING: Some requested NVMe devices were skipped
00:17:30.743 No valid NVMe controllers or AIO or URING devices found
00:17:30.743 01:58:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:17:30.743 01:58:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:17:40.724 Initializing NVMe Controllers
00:17:40.724 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:17:40.724 Controller IO queue size 128, less than required.
00:17:40.724 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:40.724 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:40.724 Initialization complete. Launching workers.
00:17:40.724 ======================================================== 00:17:40.724 Latency(us) 00:17:40.724 Device Information : IOPS MiB/s Average min max 00:17:40.724 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4074.70 509.34 31472.91 14195.58 62589.89 00:17:40.724 ======================================================== 00:17:40.724 Total : 4074.70 509.34 31472.91 14195.58 62589.89 00:17:40.724 00:17:40.724 01:58:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:40.724 01:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4637dedd-ef9f-47d3-88a6-0196d0a7e425 00:17:40.983 01:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:17:41.241 01:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7d8475b6-f7d2-4da3-8640-4083c00cc843 00:17:41.500 01:58:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:17:41.759 01:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:17:41.759 01:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:17:41.759 01:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:41.759 01:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:17:41.759 01:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:41.759 01:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:17:41.759 01:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:41.759 01:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:41.759 rmmod nvme_tcp 00:17:41.759 rmmod nvme_fabrics 00:17:41.759 rmmod nvme_keyring 00:17:41.759 01:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:41.759 01:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:17:41.759 01:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:17:41.759 01:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 88377 ']' 00:17:41.759 01:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 88377 00:17:41.759 01:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 88377 ']' 00:17:41.759 01:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 88377 00:17:41.759 01:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:17:41.759 01:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:41.759 01:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88377 00:17:41.759 killing process with pid 88377 00:17:41.759 01:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:41.759 01:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:41.759 01:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88377' 00:17:41.759 01:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 88377 00:17:41.759 01:58:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 88377 00:17:43.137 01:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:43.137 01:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:43.137 01:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:43.137 01:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:17:43.137 01:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:43.137 01:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:17:43.137 01:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:17:43.137 01:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:43.137 01:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:43.137 01:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:43.137 01:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:43.137 01:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:43.137 01:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:43.137 01:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:43.137 01:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:43.137 01:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:43.137 01:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:43.137 01:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:43.137 01:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:43.396 01:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:43.396 01:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:43.396 01:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:43.396 01:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:43.396 01:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.396 01:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:43.396 01:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.397 01:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:17:43.397 00:17:43.397 real 0m50.974s 00:17:43.397 user 3m12.587s 00:17:43.397 sys 0m11.796s 00:17:43.397 ************************************ 00:17:43.397 END TEST nvmf_perf 00:17:43.397 ************************************ 00:17:43.397 01:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:43.397 01:58:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:43.397 01:58:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:43.397 01:58:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:43.397 01:58:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:43.397 01:58:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.397 ************************************ 00:17:43.397 START TEST nvmf_fio_host 00:17:43.397 ************************************ 00:17:43.397 01:58:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:43.397 * Looking for test storage... 00:17:43.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:43.397 01:58:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:43.397 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:43.397 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:43.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.657 --rc genhtml_branch_coverage=1 00:17:43.657 --rc genhtml_function_coverage=1 00:17:43.657 --rc genhtml_legend=1 00:17:43.657 --rc geninfo_all_blocks=1 00:17:43.657 --rc geninfo_unexecuted_blocks=1 00:17:43.657 00:17:43.657 ' 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:43.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.657 --rc genhtml_branch_coverage=1 00:17:43.657 --rc genhtml_function_coverage=1 00:17:43.657 --rc genhtml_legend=1 00:17:43.657 --rc geninfo_all_blocks=1 00:17:43.657 --rc geninfo_unexecuted_blocks=1 00:17:43.657 00:17:43.657 ' 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:43.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.657 --rc genhtml_branch_coverage=1 00:17:43.657 --rc genhtml_function_coverage=1 00:17:43.657 --rc genhtml_legend=1 00:17:43.657 --rc geninfo_all_blocks=1 00:17:43.657 --rc geninfo_unexecuted_blocks=1 00:17:43.657 00:17:43.657 ' 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:43.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.657 --rc genhtml_branch_coverage=1 00:17:43.657 --rc genhtml_function_coverage=1 00:17:43.657 --rc genhtml_legend=1 00:17:43.657 --rc geninfo_all_blocks=1 00:17:43.657 --rc geninfo_unexecuted_blocks=1 00:17:43.657 00:17:43.657 ' 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.657 01:58:54 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.657 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.658 01:58:54 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:43.658 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
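The stretch of trace from here through the ping statistics is nvmftestinit tearing down any leftover test network and nvmf_veth_init rebuilding it: a network namespace for the target, veth pairs for the initiator and target sides, and a bridge joining the peer ends, with iptables ACCEPT rules for the NVMe/TCP port. Condensed from the commands in the trace below (the script builds a second pair, nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2/10.0.0.4, the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                        # bridge the two peer ends together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3                                             # initiator -> target sanity check

The "Cannot find device" and "Cannot open network namespace" messages below are expected: the teardown half runs unconditionally (each failing command is followed by true in the trace), and on a fresh run the devices simply do not exist yet.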
00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:43.658 Cannot find device "nvmf_init_br" 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:43.658 Cannot find device "nvmf_init_br2" 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:43.658 Cannot find device "nvmf_tgt_br" 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:17:43.658 Cannot find device "nvmf_tgt_br2" 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:43.658 Cannot find device "nvmf_init_br" 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:43.658 Cannot find device "nvmf_init_br2" 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:43.658 Cannot find device "nvmf_tgt_br" 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:43.658 Cannot find device "nvmf_tgt_br2" 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:43.658 Cannot find device "nvmf_br" 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:43.658 Cannot find device "nvmf_init_if" 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:17:43.658 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:43.918 Cannot find device "nvmf_init_if2" 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:43.918 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:43.918 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:43.918 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:43.918 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:17:43.918 00:17:43.918 --- 10.0.0.3 ping statistics --- 00:17:43.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.918 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:43.918 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:43.918 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:17:43.918 00:17:43.918 --- 10.0.0.4 ping statistics --- 00:17:43.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.918 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:43.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:43.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:43.918 00:17:43.918 --- 10.0.0.1 ping statistics --- 00:17:43.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.918 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:43.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:43.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:17:43.918 00:17:43.918 --- 10.0.0.2 ping statistics --- 00:17:43.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.918 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:43.918 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:44.177 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:17:44.177 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:17:44.177 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:44.177 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.177 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=89238 00:17:44.177 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:44.177 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:44.177 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 89238 00:17:44.177 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 89238 ']' 00:17:44.177 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.177 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:44.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.177 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.177 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:44.177 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.177 [2024-11-19 01:58:54.602134] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:17:44.177 [2024-11-19 01:58:54.602229] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.177 [2024-11-19 01:58:54.755287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:44.177 [2024-11-19 01:58:54.779365] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.177 [2024-11-19 01:58:54.779630] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:44.177 [2024-11-19 01:58:54.779881] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:44.177 [2024-11-19 01:58:54.780059] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:44.177 [2024-11-19 01:58:54.780222] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
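With nvmf_tgt (pid 89238) now up inside the namespace, the rest of this test is plain RPC provisioning followed by fio driving the target through the SPDK NVMe fio plugin. Condensed from the commands traced below (paths exactly as in the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # create the TCP transport and a 64 MiB RAM disk with 512-byte blocks
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    # export it: subsystem, namespace, data listener and discovery listener on 10.0.0.3:4420
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    # run fio with the SPDK ioengine preloaded; the filename string encodes the TCP target
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096

The ldd | grep libasan | awk '{print $3}' triplets that bracket each fio launch only check whether the plugin links an ASan runtime that would need to be preloaded first; none is found here, so LD_PRELOAD carries just the plugin.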
00:17:44.177 [2024-11-19 01:58:54.781272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.177 [2024-11-19 01:58:54.781403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.177 [2024-11-19 01:58:54.781464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:44.177 [2024-11-19 01:58:54.781467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.435 [2024-11-19 01:58:54.815473] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:44.435 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:44.435 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:17:44.435 01:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:44.707 [2024-11-19 01:58:55.135078] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:44.707 01:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:17:44.707 01:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:44.707 01:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.707 01:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:45.003 Malloc1 00:17:45.003 01:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:45.284 01:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:45.588 01:58:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:45.846 [2024-11-19 01:58:56.228832] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:45.846 01:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:46.105 01:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:17:46.105 01:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:46.105 01:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:46.105 01:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:46.105 01:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:46.105 01:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:46.105 01:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:46.105 01:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:17:46.105 01:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:46.105 01:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:46.105 01:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:46.105 01:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:46.105 01:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:17:46.105 01:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:46.105 01:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:46.105 01:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:46.105 01:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:46.105 01:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:17:46.105 01:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:46.105 01:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:46.105 01:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:46.105 01:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:46.105 01:58:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:46.105 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:46.105 fio-3.35 00:17:46.105 Starting 1 thread 00:17:48.635 00:17:48.635 test: (groupid=0, jobs=1): err= 0: pid=89308: Tue Nov 19 01:58:59 2024 00:17:48.635 read: IOPS=9299, BW=36.3MiB/s (38.1MB/s)(72.9MiB/2007msec) 00:17:48.635 slat (nsec): min=1868, max=317908, avg=2360.68, stdev=3098.63 00:17:48.635 clat (usec): min=2549, max=13209, avg=7160.24, stdev=604.49 00:17:48.635 lat (usec): min=2608, max=13211, avg=7162.60, stdev=604.33 00:17:48.635 clat percentiles (usec): 00:17:48.635 | 1.00th=[ 5997], 5.00th=[ 6325], 10.00th=[ 6456], 20.00th=[ 6652], 00:17:48.635 | 30.00th=[ 6849], 40.00th=[ 6980], 50.00th=[ 7111], 60.00th=[ 7242], 00:17:48.635 | 70.00th=[ 7439], 80.00th=[ 7635], 90.00th=[ 7898], 95.00th=[ 8160], 00:17:48.635 | 99.00th=[ 8586], 99.50th=[ 8979], 99.90th=[11731], 99.95th=[12387], 00:17:48.635 | 99.99th=[13173] 00:17:48.635 bw ( KiB/s): min=36432, max=38488, per=99.97%, avg=37188.00, stdev=905.20, samples=4 00:17:48.635 iops : min= 9108, max= 9622, avg=9297.00, stdev=226.30, samples=4 00:17:48.635 write: IOPS=9303, BW=36.3MiB/s (38.1MB/s)(72.9MiB/2007msec); 0 zone resets 00:17:48.635 slat (nsec): min=1942, max=258358, avg=2429.28, stdev=2306.51 00:17:48.635 clat (usec): min=2411, max=12369, avg=6534.16, stdev=544.79 00:17:48.635 lat (usec): min=2425, max=12371, avg=6536.59, stdev=544.74 00:17:48.635 
clat percentiles (usec): 00:17:48.635 | 1.00th=[ 5473], 5.00th=[ 5800], 10.00th=[ 5932], 20.00th=[ 6128], 00:17:48.635 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6521], 60.00th=[ 6652], 00:17:48.635 | 70.00th=[ 6783], 80.00th=[ 6980], 90.00th=[ 7177], 95.00th=[ 7439], 00:17:48.635 | 99.00th=[ 7963], 99.50th=[ 8160], 99.90th=[ 9896], 99.95th=[11731], 00:17:48.635 | 99.99th=[12387] 00:17:48.635 bw ( KiB/s): min=36768, max=37496, per=100.00%, avg=37234.00, stdev=322.08, samples=4 00:17:48.635 iops : min= 9192, max= 9374, avg=9308.50, stdev=80.52, samples=4 00:17:48.635 lat (msec) : 4=0.08%, 10=99.78%, 20=0.14% 00:17:48.635 cpu : usr=69.29%, sys=23.38%, ctx=30, majf=0, minf=7 00:17:48.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:48.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:48.635 issued rwts: total=18665,18673,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.635 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.635 00:17:48.635 Run status group 0 (all jobs): 00:17:48.635 READ: bw=36.3MiB/s (38.1MB/s), 36.3MiB/s-36.3MiB/s (38.1MB/s-38.1MB/s), io=72.9MiB (76.5MB), run=2007-2007msec 00:17:48.635 WRITE: bw=36.3MiB/s (38.1MB/s), 36.3MiB/s-36.3MiB/s (38.1MB/s-38.1MB/s), io=72.9MiB (76.5MB), run=2007-2007msec 00:17:48.635 01:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:17:48.635 01:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:17:48.635 01:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:48.635 01:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:48.635 01:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:48.635 01:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:48.635 01:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:17:48.635 01:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:48.635 01:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:48.635 01:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:48.635 01:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:17:48.635 01:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:48.635 01:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:48.635 01:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:48.635 01:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:48.635 01:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:48.635 01:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:48.635 01:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:17:48.635 01:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:48.635 01:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:48.635 01:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:48.635 01:58:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:17:48.635 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:17:48.635 fio-3.35 00:17:48.635 Starting 1 thread 00:17:51.163 00:17:51.163 test: (groupid=0, jobs=1): err= 0: pid=89351: Tue Nov 19 01:59:01 2024 00:17:51.163 read: IOPS=8422, BW=132MiB/s (138MB/s)(264MiB/2008msec) 00:17:51.163 slat (usec): min=2, max=130, avg= 3.71, stdev= 2.32 00:17:51.163 clat (usec): min=2268, max=16353, avg=8310.26, stdev=2474.14 00:17:51.163 lat (usec): min=2271, max=16356, avg=8313.98, stdev=2474.21 00:17:51.163 clat percentiles (usec): 00:17:51.163 | 1.00th=[ 3916], 5.00th=[ 4686], 10.00th=[ 5276], 20.00th=[ 6128], 00:17:51.163 | 30.00th=[ 6783], 40.00th=[ 7439], 50.00th=[ 8094], 60.00th=[ 8717], 00:17:51.163 | 70.00th=[ 9634], 80.00th=[10290], 90.00th=[11731], 95.00th=[12911], 00:17:51.163 | 99.00th=[14746], 99.50th=[15139], 99.90th=[15533], 99.95th=[15926], 00:17:51.163 | 99.99th=[16319] 00:17:51.163 bw ( KiB/s): min=65824, max=74496, per=51.65%, avg=69600.00, stdev=4091.58, samples=4 00:17:51.163 iops : min= 4114, max= 4656, avg=4350.00, stdev=255.72, samples=4 00:17:51.163 write: IOPS=4898, BW=76.5MiB/s (80.3MB/s)(142MiB/1858msec); 0 zone resets 00:17:51.163 slat (usec): min=32, max=353, avg=38.37, stdev= 9.28 00:17:51.163 clat (usec): min=7091, max=24987, avg=11939.76, stdev=2289.29 00:17:51.163 lat (usec): min=7125, max=25024, avg=11978.12, stdev=2290.51 00:17:51.163 clat percentiles (usec): 00:17:51.163 | 1.00th=[ 8029], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 9896], 00:17:51.163 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11600], 60.00th=[12256], 00:17:51.163 | 70.00th=[13042], 80.00th=[13829], 90.00th=[14877], 95.00th=[16188], 00:17:51.163 | 99.00th=[18220], 99.50th=[19006], 99.90th=[21365], 99.95th=[24511], 00:17:51.163 | 99.99th=[25035] 00:17:51.163 bw ( KiB/s): min=68608, max=77312, per=92.03%, avg=72136.00, stdev=3983.49, samples=4 00:17:51.163 iops : min= 4288, max= 4832, avg=4508.50, stdev=248.97, samples=4 00:17:51.163 lat (msec) : 4=0.80%, 10=56.19%, 20=42.92%, 50=0.09% 00:17:51.163 cpu : usr=80.12%, sys=15.94%, ctx=2, majf=0, minf=3 00:17:51.163 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:17:51.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:51.163 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:51.163 issued rwts: total=16912,9102,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:51.163 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:51.163 00:17:51.163 Run status group 0 (all jobs): 00:17:51.163 
READ: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=264MiB (277MB), run=2008-2008msec 00:17:51.163 WRITE: bw=76.5MiB/s (80.3MB/s), 76.5MiB/s-76.5MiB/s (80.3MB/s-80.3MB/s), io=142MiB (149MB), run=1858-1858msec 00:17:51.163 01:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:51.422 01:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:17:51.422 01:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:17:51.422 01:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:17:51.422 01:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:17:51.422 01:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:17:51.422 01:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:17:51.422 01:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:51.422 01:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:17:51.422 01:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:17:51.422 01:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:17:51.422 01:59:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:17:51.680 Nvme0n1 00:17:51.680 01:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:17:51.939 01:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=e7ee9d28-ea8f-441e-90f7-1e913ba0a498 00:17:51.939 01:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb e7ee9d28-ea8f-441e-90f7-1e913ba0a498 00:17:51.939 01:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=e7ee9d28-ea8f-441e-90f7-1e913ba0a498 00:17:51.939 01:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:17:51.939 01:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:17:51.939 01:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:17:51.939 01:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:52.198 01:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:17:52.198 { 00:17:52.198 "uuid": "e7ee9d28-ea8f-441e-90f7-1e913ba0a498", 00:17:52.198 "name": "lvs_0", 00:17:52.198 "base_bdev": "Nvme0n1", 00:17:52.198 "total_data_clusters": 4, 00:17:52.198 "free_clusters": 4, 00:17:52.198 "block_size": 4096, 00:17:52.198 "cluster_size": 1073741824 00:17:52.198 } 00:17:52.198 ]' 00:17:52.198 01:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="e7ee9d28-ea8f-441e-90f7-1e913ba0a498") .free_clusters' 00:17:52.198 01:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=4 00:17:52.198 
01:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="e7ee9d28-ea8f-441e-90f7-1e913ba0a498") .cluster_size' 00:17:52.457 4096 00:17:52.457 01:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:17:52.457 01:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4096 00:17:52.457 01:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 4096 00:17:52.457 01:59:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:17:52.457 1ee0d9d3-63e1-4117-88a6-a55eb9e73f78 00:17:52.715 01:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:17:52.715 01:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:17:52.974 01:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:17:53.233 01:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:53.233 01:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:53.233 01:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:53.233 01:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:53.233 01:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:53.233 01:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:53.233 01:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:17:53.233 01:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:53.233 01:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:53.233 01:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:53.233 01:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:53.233 01:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:17:53.233 01:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:53.233 01:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:53.233 01:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:53.233 01:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:53.233 01:59:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:17:53.233 01:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:53.233 01:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:53.233 01:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:53.233 01:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:53.233 01:59:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:53.492 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:53.492 fio-3.35 00:17:53.492 Starting 1 thread 00:17:56.025 00:17:56.025 test: (groupid=0, jobs=1): err= 0: pid=89464: Tue Nov 19 01:59:06 2024 00:17:56.025 read: IOPS=6172, BW=24.1MiB/s (25.3MB/s)(48.4MiB/2009msec) 00:17:56.025 slat (nsec): min=1925, max=335418, avg=2579.59, stdev=3859.51 00:17:56.025 clat (usec): min=2954, max=19096, avg=10855.89, stdev=899.11 00:17:56.025 lat (usec): min=2962, max=19098, avg=10858.47, stdev=898.81 00:17:56.025 clat percentiles (usec): 00:17:56.025 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10159], 00:17:56.025 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[11076], 00:17:56.025 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12125], 00:17:56.025 | 99.00th=[12780], 99.50th=[13304], 99.90th=[17433], 99.95th=[18220], 00:17:56.025 | 99.99th=[19006] 00:17:56.025 bw ( KiB/s): min=23696, max=25184, per=99.89%, avg=24664.00, stdev=660.76, samples=4 00:17:56.025 iops : min= 5924, max= 6296, avg=6166.00, stdev=165.19, samples=4 00:17:56.025 write: IOPS=6157, BW=24.1MiB/s (25.2MB/s)(48.3MiB/2009msec); 0 zone resets 00:17:56.025 slat (usec): min=2, max=486, avg= 2.66, stdev= 4.63 00:17:56.025 clat (usec): min=2551, max=18149, avg=9825.87, stdev=837.35 00:17:56.025 lat (usec): min=2564, max=18152, avg=9828.54, stdev=837.23 00:17:56.025 clat percentiles (usec): 00:17:56.025 | 1.00th=[ 8029], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9241], 00:17:56.025 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10028], 00:17:56.025 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10814], 95.00th=[11076], 00:17:56.025 | 99.00th=[11600], 99.50th=[11994], 99.90th=[16057], 99.95th=[17171], 00:17:56.025 | 99.99th=[17957] 00:17:56.025 bw ( KiB/s): min=24512, max=24712, per=99.98%, avg=24626.00, stdev=83.23, samples=4 00:17:56.025 iops : min= 6128, max= 6178, avg=6156.50, stdev=20.81, samples=4 00:17:56.025 lat (msec) : 4=0.06%, 10=36.39%, 20=63.55% 00:17:56.025 cpu : usr=75.40%, sys=19.62%, ctx=27, majf=0, minf=7 00:17:56.025 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:17:56.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:56.025 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:56.025 issued rwts: total=12401,12371,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:56.025 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:56.025 00:17:56.025 Run status group 0 (all jobs): 00:17:56.025 READ: bw=24.1MiB/s (25.3MB/s), 24.1MiB/s-24.1MiB/s (25.3MB/s-25.3MB/s), io=48.4MiB (50.8MB), run=2009-2009msec 
00:17:56.025 WRITE: bw=24.1MiB/s (25.2MB/s), 24.1MiB/s-24.1MiB/s (25.2MB/s-25.2MB/s), io=48.3MiB (50.7MB), run=2009-2009msec 00:17:56.025 01:59:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:56.025 01:59:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:17:56.284 01:59:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=6c46c88a-f49c-4663-ad59-51e336e4537c 00:17:56.284 01:59:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 6c46c88a-f49c-4663-ad59-51e336e4537c 00:17:56.284 01:59:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=6c46c88a-f49c-4663-ad59-51e336e4537c 00:17:56.284 01:59:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:17:56.284 01:59:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:17:56.284 01:59:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:17:56.284 01:59:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:56.543 01:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:17:56.543 { 00:17:56.543 "uuid": "e7ee9d28-ea8f-441e-90f7-1e913ba0a498", 00:17:56.543 "name": "lvs_0", 00:17:56.543 "base_bdev": "Nvme0n1", 00:17:56.543 "total_data_clusters": 4, 00:17:56.543 "free_clusters": 0, 00:17:56.543 "block_size": 4096, 00:17:56.543 "cluster_size": 1073741824 00:17:56.543 }, 00:17:56.543 { 00:17:56.543 "uuid": "6c46c88a-f49c-4663-ad59-51e336e4537c", 00:17:56.543 "name": "lvs_n_0", 00:17:56.543 "base_bdev": "1ee0d9d3-63e1-4117-88a6-a55eb9e73f78", 00:17:56.543 "total_data_clusters": 1022, 00:17:56.543 "free_clusters": 1022, 00:17:56.543 "block_size": 4096, 00:17:56.543 "cluster_size": 4194304 00:17:56.543 } 00:17:56.543 ]' 00:17:56.543 01:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="6c46c88a-f49c-4663-ad59-51e336e4537c") .free_clusters' 00:17:56.803 01:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=1022 00:17:56.803 01:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="6c46c88a-f49c-4663-ad59-51e336e4537c") .cluster_size' 00:17:56.803 01:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:17:56.803 01:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4088 00:17:56.803 01:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 4088 00:17:56.803 4088 00:17:56.803 01:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:17:57.061 be3a59d5-8349-4db9-9ef5-e626ccf82496 00:17:57.061 01:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:17:57.320 01:59:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:17:57.580 01:59:07 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:17:57.839 01:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:57.839 01:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:57.839 01:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:57.839 01:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:57.839 01:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:57.839 01:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:57.839 01:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:17:57.839 01:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:57.839 01:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:57.839 01:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:57.839 01:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:17:57.839 01:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:57.839 01:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:57.839 01:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:57.839 01:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:57.839 01:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:57.839 01:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:17:57.839 01:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:57.839 01:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:57.839 01:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:57.839 01:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:57.839 01:59:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:57.839 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:57.839 fio-3.35 00:17:57.839 Starting 1 thread 00:18:00.375 00:18:00.375 test: (groupid=0, jobs=1): err= 0: pid=89539: Tue Nov 19 01:59:10 2024 00:18:00.375 read: 
IOPS=5575, BW=21.8MiB/s (22.8MB/s)(43.8MiB/2010msec) 00:18:00.375 slat (nsec): min=1934, max=320957, avg=2700.77, stdev=4068.43 00:18:00.375 clat (usec): min=3380, max=20141, avg=12073.51, stdev=1006.03 00:18:00.375 lat (usec): min=3389, max=20143, avg=12076.21, stdev=1005.67 00:18:00.375 clat percentiles (usec): 00:18:00.375 | 1.00th=[ 9896], 5.00th=[10683], 10.00th=[10945], 20.00th=[11338], 00:18:00.375 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12256], 00:18:00.375 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13304], 95.00th=[13698], 00:18:00.375 | 99.00th=[14353], 99.50th=[14746], 99.90th=[18482], 99.95th=[19792], 00:18:00.375 | 99.99th=[20055] 00:18:00.375 bw ( KiB/s): min=21520, max=22832, per=99.89%, avg=22276.00, stdev=564.84, samples=4 00:18:00.375 iops : min= 5380, max= 5708, avg=5569.00, stdev=141.21, samples=4 00:18:00.375 write: IOPS=5542, BW=21.6MiB/s (22.7MB/s)(43.5MiB/2010msec); 0 zone resets 00:18:00.375 slat (usec): min=2, max=225, avg= 2.72, stdev= 2.84 00:18:00.375 clat (usec): min=2146, max=21279, avg=10895.07, stdev=971.27 00:18:00.375 lat (usec): min=2158, max=21281, avg=10897.80, stdev=971.09 00:18:00.375 clat percentiles (usec): 00:18:00.375 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10159], 00:18:00.375 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:18:00.375 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12256], 00:18:00.375 | 99.00th=[13042], 99.50th=[13435], 99.90th=[18220], 99.95th=[19792], 00:18:00.375 | 99.99th=[21365] 00:18:00.375 bw ( KiB/s): min=21888, max=22488, per=99.97%, avg=22162.00, stdev=253.69, samples=4 00:18:00.375 iops : min= 5472, max= 5622, avg=5540.50, stdev=63.42, samples=4 00:18:00.375 lat (msec) : 4=0.06%, 10=7.68%, 20=92.24%, 50=0.02% 00:18:00.375 cpu : usr=76.61%, sys=18.62%, ctx=19, majf=0, minf=7 00:18:00.375 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:18:00.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:00.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:00.375 issued rwts: total=11206,11140,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:00.375 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:00.375 00:18:00.375 Run status group 0 (all jobs): 00:18:00.375 READ: bw=21.8MiB/s (22.8MB/s), 21.8MiB/s-21.8MiB/s (22.8MB/s-22.8MB/s), io=43.8MiB (45.9MB), run=2010-2010msec 00:18:00.375 WRITE: bw=21.6MiB/s (22.7MB/s), 21.6MiB/s-21.6MiB/s (22.7MB/s-22.7MB/s), io=43.5MiB (45.6MB), run=2010-2010msec 00:18:00.375 01:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:00.375 01:59:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:18:00.634 01:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:18:00.893 01:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:18:01.152 01:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:18:01.410 01:59:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:18:01.669 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:02.237 rmmod nvme_tcp 00:18:02.237 rmmod nvme_fabrics 00:18:02.237 rmmod nvme_keyring 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 89238 ']' 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 89238 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 89238 ']' 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 89238 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89238 00:18:02.237 killing process with pid 89238 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89238' 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 89238 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 89238 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:02.237 
01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:02.237 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:02.497 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:02.497 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:02.497 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:02.497 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:02.497 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:02.497 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:02.497 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:02.497 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:02.497 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:02.497 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:02.497 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.497 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:02.497 01:59:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.497 01:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:18:02.497 00:18:02.497 real 0m19.110s 00:18:02.497 user 1m23.853s 00:18:02.497 sys 0m4.342s 00:18:02.497 ************************************ 00:18:02.497 END TEST nvmf_fio_host 00:18:02.497 ************************************ 00:18:02.497 01:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.497 01:59:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.497 01:59:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:02.497 01:59:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:02.497 01:59:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.497 01:59:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.497 ************************************ 00:18:02.497 START TEST nvmf_failover 00:18:02.497 ************************************ 00:18:02.497 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:02.758 * Looking for test storage... 
00:18:02.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:02.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.758 --rc genhtml_branch_coverage=1 00:18:02.758 --rc genhtml_function_coverage=1 00:18:02.758 --rc genhtml_legend=1 00:18:02.758 --rc geninfo_all_blocks=1 00:18:02.758 --rc geninfo_unexecuted_blocks=1 00:18:02.758 00:18:02.758 ' 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:02.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.758 --rc genhtml_branch_coverage=1 00:18:02.758 --rc genhtml_function_coverage=1 00:18:02.758 --rc genhtml_legend=1 00:18:02.758 --rc geninfo_all_blocks=1 00:18:02.758 --rc geninfo_unexecuted_blocks=1 00:18:02.758 00:18:02.758 ' 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:02.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.758 --rc genhtml_branch_coverage=1 00:18:02.758 --rc genhtml_function_coverage=1 00:18:02.758 --rc genhtml_legend=1 00:18:02.758 --rc geninfo_all_blocks=1 00:18:02.758 --rc geninfo_unexecuted_blocks=1 00:18:02.758 00:18:02.758 ' 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:02.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.758 --rc genhtml_branch_coverage=1 00:18:02.758 --rc genhtml_function_coverage=1 00:18:02.758 --rc genhtml_legend=1 00:18:02.758 --rc geninfo_all_blocks=1 00:18:02.758 --rc geninfo_unexecuted_blocks=1 00:18:02.758 00:18:02.758 ' 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:18:02.758 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.759 
01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:02.759 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
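A note on the '[: : integer expression expected' message traced above (nvmf/common.sh line 33): it is a bash pitfall, not a test failure. The guard '[' '' -eq 1 ']' hands an empty string to a numeric comparison, which test(1) rejects; the harness simply treats the failed comparison as false and continues (the very next trace line proceeds at line 37). A minimal sketch of the failure mode and a defensive rewrite, with 'flag' as a hypothetical stand-in for whatever variable common.sh expands there:

    flag=""                      # empty, e.g. an unset environment toggle
    if [ "$flag" -eq 1 ]; then   # prints "[: : integer expression expected", evaluates false
        echo "enabled"
    fi

    if [ "${flag:-0}" -eq 1 ]; then   # default the empty value to 0: no error, cleanly false
        echo "enabled"
    fi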
00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:02.759 Cannot find device "nvmf_init_br" 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:02.759 Cannot find device "nvmf_init_br2" 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:18:02.759 Cannot find device "nvmf_tgt_br" 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:02.759 Cannot find device "nvmf_tgt_br2" 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:02.759 Cannot find device "nvmf_init_br" 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:02.759 Cannot find device "nvmf_init_br2" 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:18:02.759 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:03.019 Cannot find device "nvmf_tgt_br" 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:03.019 Cannot find device "nvmf_tgt_br2" 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:03.019 Cannot find device "nvmf_br" 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:03.019 Cannot find device "nvmf_init_if" 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:03.019 Cannot find device "nvmf_init_if2" 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:03.019 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:03.019 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:03.019 
01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:03.019 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:03.019 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:18:03.019 00:18:03.019 --- 10.0.0.3 ping statistics --- 00:18:03.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.019 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:18:03.019 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:03.279 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:03.279 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:18:03.279 00:18:03.279 --- 10.0.0.4 ping statistics --- 00:18:03.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.279 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:18:03.279 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:03.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:03.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:18:03.279 00:18:03.279 --- 10.0.0.1 ping statistics --- 00:18:03.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.279 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:03.279 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:03.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:03.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:18:03.279 00:18:03.279 --- 10.0.0.2 ping statistics --- 00:18:03.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.279 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:18:03.279 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:03.279 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:18:03.279 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:03.279 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:03.279 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:03.279 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:03.279 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:03.279 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:03.279 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:03.279 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:18:03.279 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:03.279 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:03.279 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:03.279 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=89835 00:18:03.279 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:03.279 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 89835 00:18:03.279 01:59:13 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 89835 ']'
00:18:03.279 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:03.279 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:03.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:03.279 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:03.279 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:03.279 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:18:03.279 [2024-11-19 01:59:13.736418] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization...
00:18:03.279 [2024-11-19 01:59:13.736521] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:03.279 [2024-11-19 01:59:13.882715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:18:03.539 [2024-11-19 01:59:13.904736] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:03.539 [2024-11-19 01:59:13.905072] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:03.539 [2024-11-19 01:59:13.905094] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:03.539 [2024-11-19 01:59:13.905104] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:03.539 [2024-11-19 01:59:13.905113] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
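The target was launched earlier in the nvmf_tgt_ns_spdk namespace with '-m 0xE', which reappears above as '-c 0xE' in the DPDK EAL parameters. That mask is a per-core bitmap: 0xE is binary 1110, so reactors should land on cores 1, 2 and 3 while core 0 stays free for the host side, which is exactly what the 'Reactor started on core ...' notices just below report. A tiny illustrative decode (the loop is ours, not part of the test scripts):

    mask=0xE                     # core mask handed to nvmf_tgt
    for core in {0..3}; do
        (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
    done
    # -> cores 1, 2 and 3; matches the three reactor_run notices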
00:18:03.539 [2024-11-19 01:59:13.905867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:18:03.539 [2024-11-19 01:59:13.906563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:18:03.539 [2024-11-19 01:59:13.906566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:18:03.539 [2024-11-19 01:59:13.939460] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:18:03.539 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:03.539 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:18:03.539 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:18:03.539 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:03.539 01:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:18:03.539 01:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:03.539 01:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:18:03.798 [2024-11-19 01:59:14.297937] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:03.798 01:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:18:04.058 Malloc0
00:18:04.058 01:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:18:04.317 01:59:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:18:04.576 01:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:18:04.835 [2024-11-19 01:59:15.367242] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:18:04.835 01:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:18:05.094 [2024-11-19 01:59:15.631498] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:18:05.094 01:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:18:05.353 [2024-11-19 01:59:15.899749] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 ***
00:18:05.353 01:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:18:05.353 01:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=89885
00:18:05.353 01:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
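Condensed, the target-side preparation just traced is five RPC steps: create the TCP transport, create a RAM-backed bdev, wrap it in a subsystem, and listen on three ports of 10.0.0.3 so the host has paths to fail over between. A sketch using the same commands and names as the trace (the loop is our shorthand; the script adds the listeners one at a time):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192    # same transport options as the trace above
    $rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do                  # three listeners on one subsystem
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s $port
    done

bdevperf is then launched with '-z' so it idles until an RPC triggers the run; its '-q 128 -o 4096 -w verify -t 15' options match the queue_depth, io_size, workload and runtime echoed back in the results JSON further down.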
00:18:05.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:05.353 01:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 89885 /var/tmp/bdevperf.sock
00:18:05.353 01:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 89885 ']'
00:18:05.353 01:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:05.353 01:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:05.353 01:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:05.353 01:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:05.353 01:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:18:05.611 01:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:05.611 01:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:18:05.611 01:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:18:06.179 NVMe0n1
00:18:06.179 01:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:18:06.448
00:18:06.448 01:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=89901
00:18:06.448 01:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:06.448 01:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:18:07.396 01:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:18:07.655 01:59:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:18:10.941 01:59:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:18:10.941
00:18:10.941 01:59:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:18:11.199 01:59:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:18:14.487 01:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:18:14.487 [2024-11-19 01:59:24.985944] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:18:14.487 01:59:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:18:15.423 01:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:18:15.682 01:59:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 89901
00:18:22.255 {
00:18:22.255 "results": [
00:18:22.255 {
00:18:22.255 "job": "NVMe0n1",
00:18:22.255 "core_mask": "0x1",
00:18:22.255 "workload": "verify",
00:18:22.255 "status": "finished",
00:18:22.255 "verify_range": {
00:18:22.255 "start": 0,
00:18:22.255 "length": 16384
00:18:22.255 },
00:18:22.255 "queue_depth": 128,
00:18:22.255 "io_size": 4096,
00:18:22.255 "runtime": 15.011063,
00:18:22.255 "iops": 9354.767214020752,
00:18:22.255 "mibps": 36.542059429768564,
00:18:22.255 "io_failed": 3677,
00:18:22.255 "io_timeout": 0,
00:18:22.255 "avg_latency_us": 13303.292328741889,
00:18:22.255 "min_latency_us": 554.8218181818182,
00:18:22.255 "max_latency_us": 16920.203636363636
00:18:22.255 }
00:18:22.255 ],
00:18:22.255 "core_count": 1
00:18:22.255 }
00:18:22.255 01:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 89885
00:18:22.255 01:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 89885 ']'
00:18:22.255 01:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 89885
00:18:22.255 01:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:18:22.255 01:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:22.255 01:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89885
00:18:22.255 killing process with pid 89885
00:18:22.255 01:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:22.255 01:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:22.255 01:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89885'
00:18:22.255 01:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 89885
00:18:22.255 01:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 89885
00:18:22.255 01:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:18:22.255 [2024-11-19 01:59:15.974918] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization...
00:18:22.255 [2024-11-19 01:59:15.975026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89885 ]
00:18:22.255 [2024-11-19 01:59:16.126377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:22.255 [2024-11-19 01:59:16.146943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:22.255 [2024-11-19 01:59:16.175865] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:18:22.255 Running I/O for 15 seconds...
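Read together, the host-side trace above is the whole failover exercise: the same subsystem is attached twice with '-x failover' so NVMe0n1 starts with a standby path, and while the 15-second verify job runs, listeners are removed and re-added to push I/O from port 4420 to 4421 to 4422 and back to 4420. The results JSON records the cost: 3677 failed I/Os across the transitions against roughly 140k completed (9354 IOPS over 15 s), with the job still finishing. A condensed sketch of the choreography (the attach() helper is ours, not a function in failover.sh; paths and arguments match the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1
    attach() {   # add one path to the NVMe0 controller in failover mode
        $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 \
            -s "$1" -f ipv4 -n $nqn -x failover
    }

    attach 4420 && attach 4421                       # primary path plus standby
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests &
    sleep 1
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.3 -s 4420; sleep 3  # -> 4421
    attach 4422                                      # third path appears mid-run
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.3 -s 4421; sleep 3  # -> 4422
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.3 -s 4420; sleep 1
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.3 -s 4422           # -> 4420
    wait                                             # join perform_tests; JSON as above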
00:18:22.255 7317.00 IOPS, 28.58 MiB/s [2024-11-19T01:59:32.870Z]
00:18:22.255 [2024-11-19 01:59:18.126439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.255 [2024-11-19 01:59:18.126523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same command/completion pattern for the rest of the in-flight I/O on sqid:1: WRITE lba:66592-66720 and READ lba:65704-66568, every command printed and completed with ABORTED - SQ DELETION (00/08) ...]
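The run of NOTICE pairs above is SPDK's qpair teardown path: once the TCP connection to 10.0.0.3:4420 drops, every command still outstanding on I/O submission queue 1 is printed by nvme_io_qpair_print_command() together with an ABORTED - SQ DELETION completion, generic status code type 00h, status code 08h (the "(00/08)" in each completion line). Because an SQ-deletion abort means the command never executed on the target, the status is retryable. A minimal sketch of recognizing it in a completion callback with the public spdk/nvme.h types; the io_ctx struct and the retry bookkeeping are hypothetical, not part of this test:

    #include <stdbool.h>
    #include "spdk/nvme.h"

    struct io_ctx {                         /* hypothetical per-I/O context */
            int retries_left;
    };

    /* True when a completion carries the ABORTED - SQ DELETION (00/08)
     * status seen throughout this log. */
    static bool
    is_sq_deletion_abort(const struct spdk_nvme_cpl *cpl)
    {
            return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                   cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
    }

    /* Matches the spdk_nvme_cmd_cb signature used by spdk_nvme_ns_cmd_read()
     * and friends. */
    static void
    io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            struct io_ctx *ctx = arg;

            if (spdk_nvme_cpl_is_error(cpl) && is_sq_deletion_abort(cpl) &&
                ctx->retries_left-- > 0) {
                    /* hypothetical: queue the I/O for resubmission once a
                     * healthy qpair exists again after the reset below */
                    return;
            }
            /* success, or a terminal error */
    }

Treating (00/08) as retryable rather than fatal is consistent with the benchmark below continuing instead of erroring out.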
00:18:22.258 [2024-11-19 01:59:18.130260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2419580 is same with the state(6) to be set
00:18:22.258 [2024-11-19 01:59:18.130277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:18:22.258 [2024-11-19 01:59:18.130287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:18:22.258 [2024-11-19 01:59:18.130297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66576 len:8 PRP1 0x0 PRP2 0x0
00:18:22.258 [2024-11-19 01:59:18.130310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:22.258 [2024-11-19 01:59:18.130357] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
00:18:22.258 [2024-11-19 01:59:18.130412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:22.258 [2024-11-19 01:59:18.130434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... three more ASYNC EVENT REQUEST (0c) commands (qid:0 cid:1-3) printed and completed the same way ...]
00:18:22.258 [2024-11-19 01:59:18.130528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:18:22.258 [2024-11-19 01:59:18.134191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:18:22.258 [2024-11-19 01:59:18.134243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f69c0 (9): Bad file descriptor
00:18:22.258 [2024-11-19 01:59:18.162778] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:18:22.258 8062.00 IOPS, 31.49 MiB/s [2024-11-19T01:59:32.873Z] 8361.33 IOPS, 32.66 MiB/s [2024-11-19T01:59:32.873Z] 8709.00 IOPS, 34.02 MiB/s [2024-11-19T01:59:32.873Z]
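Here the bdev_nvme layer reacts to the dead connection: it starts a failover from 10.0.0.3:4420 to the second path on 4421, the admin queue's outstanding ASYNC EVENT REQUESTs are aborted, the controller is marked failed, disconnected, and reset, and the reset completes successfully, after which the bdevperf samples recover from 7317 to 8709 IOPS. At the raw driver level, the equivalent fail / reset / reopen cycle looks roughly like the sketch below (bdev_nvme adds the multi-path trid bookkeeping seen in the log on top); recover_and_reopen() is illustrative, not an SPDK API:

    #include "spdk/nvme.h"

    /* Driver-level analogue of the recovery sequence above.
     * spdk_nvme_ctrlr_reset() tears down all qpairs, reconnects, and
     * re-enables the controller; I/O qpairs must then be reallocated. */
    static struct spdk_nvme_qpair *
    recover_and_reopen(struct spdk_nvme_ctrlr *ctrlr)
    {
            if (spdk_nvme_ctrlr_is_failed(ctrlr) &&
                spdk_nvme_ctrlr_reset(ctrlr) != 0) {
                    return NULL;    /* reset failed; give up on this path */
            }
            /* NULL opts / size 0: default I/O qpair options */
            return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
    }

About three seconds later the test tears down the new connection as well, and the same SQ-deletion abort pattern repeats: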
00:18:22.258 [2024-11-19 01:59:21.738618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:83664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:22.258 [2024-11-19 01:59:21.738682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same pattern for the remaining in-flight I/O on sqid:1: READ lba:83672-83928 and WRITE lba:84176-84424, every command printed and completed with ABORTED - SQ DELETION (00/08) ...]
00:18:22.260 [2024-11-19 01:59:21.740693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:20 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.260 [2024-11-19 01:59:21.740706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.260 [2024-11-19 01:59:21.740720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.260 [2024-11-19 01:59:21.740733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.260 [2024-11-19 01:59:21.740748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.260 [2024-11-19 01:59:21.740761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.260 [2024-11-19 01:59:21.740775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.260 [2024-11-19 01:59:21.740788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.260 [2024-11-19 01:59:21.740802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.260 [2024-11-19 01:59:21.740816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.260 [2024-11-19 01:59:21.740830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.260 [2024-11-19 01:59:21.740843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.260 [2024-11-19 01:59:21.740858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.260 [2024-11-19 01:59:21.740872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.260 [2024-11-19 01:59:21.740886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.260 [2024-11-19 01:59:21.740900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.260 [2024-11-19 01:59:21.740914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.261 [2024-11-19 01:59:21.740927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.740942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.261 [2024-11-19 01:59:21.740954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.740969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:84464 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.261 [2024-11-19 01:59:21.740988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.261 [2024-11-19 01:59:21.741016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.261 [2024-11-19 01:59:21.741044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.261 [2024-11-19 01:59:21.741077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.261 [2024-11-19 01:59:21.741104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.261 [2024-11-19 01:59:21.741131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.261 [2024-11-19 01:59:21.741158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.261 [2024-11-19 01:59:21.741185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.261 [2024-11-19 01:59:21.741213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.261 [2024-11-19 01:59:21.741240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:22.261 [2024-11-19 01:59:21.741268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.261 [2024-11-19 01:59:21.741295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.261 [2024-11-19 01:59:21.741323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.261 [2024-11-19 01:59:21.741357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.261 [2024-11-19 01:59:21.741384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.261 [2024-11-19 01:59:21.741412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.261 [2024-11-19 01:59:21.741439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.261 [2024-11-19 01:59:21.741466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.261 [2024-11-19 01:59:21.741494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.261 [2024-11-19 01:59:21.741537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.261 [2024-11-19 01:59:21.741565] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.261 [2024-11-19 01:59:21.741592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.261 [2024-11-19 01:59:21.741619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.261 [2024-11-19 01:59:21.741647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.261 [2024-11-19 01:59:21.741674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.261 [2024-11-19 01:59:21.741701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.261 [2024-11-19 01:59:21.741736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.261 [2024-11-19 01:59:21.741763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.261 [2024-11-19 01:59:21.741794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.261 [2024-11-19 01:59:21.741822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.261 [2024-11-19 01:59:21.741849] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.261 [2024-11-19 01:59:21.741876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.261 [2024-11-19 01:59:21.741932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.261 [2024-11-19 01:59:21.741960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.741975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.261 [2024-11-19 01:59:21.741989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.742006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.261 [2024-11-19 01:59:21.742020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.742034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.261 [2024-11-19 01:59:21.742048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.261 [2024-11-19 01:59:21.742063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.261 [2024-11-19 01:59:21.742076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.262 [2024-11-19 01:59:21.742090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.262 [2024-11-19 01:59:21.742110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.262 [2024-11-19 01:59:21.742126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.262 [2024-11-19 01:59:21.742139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.262 [2024-11-19 01:59:21.742154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.262 [2024-11-19 01:59:21.742167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.262 [2024-11-19 01:59:21.742182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.262 [2024-11-19 01:59:21.742195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.262 [2024-11-19 01:59:21.742210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.262 [2024-11-19 01:59:21.742238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.262 [2024-11-19 01:59:21.742253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:22.262 [2024-11-19 01:59:21.742265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.262 [2024-11-19 01:59:21.742282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.262 [2024-11-19 01:59:21.742295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.262 [2024-11-19 01:59:21.742310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.262 [2024-11-19 01:59:21.742323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.262 [2024-11-19 01:59:21.742337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.262 [2024-11-19 01:59:21.742350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.262 [2024-11-19 01:59:21.742365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.262 [2024-11-19 01:59:21.742378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.262 [2024-11-19 01:59:21.742392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.262 [2024-11-19 01:59:21.742405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.262 [2024-11-19 01:59:21.742419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.262 [2024-11-19 01:59:21.742432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.262 [2024-11-19 01:59:21.742447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.262 [2024-11-19 01:59:21.742460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.262 
00:18:22.262 [2024-11-19 01:59:21.742515] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:18:22.262 [2024-11-19 01:59:21.742542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:18:22.262 [2024-11-19 01:59:21.742555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84168 len:8 PRP1 0x0 PRP2 0x0
00:18:22.262 [2024-11-19 01:59:21.742568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:22.262 [2024-11-19 01:59:21.742614] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422
00:18:22.262 [2024-11-19 01:59:21.742667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:22.262 [2024-11-19 01:59:21.742688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same pair repeats for the remaining queued ASYNC EVENT REQUESTs on qid:0 (cid:2, cid:1, cid:0) ...]
00:18:22.262 [2024-11-19 01:59:21.742782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:18:22.262 [2024-11-19 01:59:21.742815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f69c0 (9): Bad file descriptor
00:18:22.262 [2024-11-19 01:59:21.746403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:18:22.262 [2024-11-19 01:59:21.775290] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
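The block above is the interesting part of this run: the initiator aborts every queued I/O with SQ DELETION, fails the controller over from 10.0.0.3:4421 to 10.0.0.3:4422, and resets it. For sifting a storm like this out of a multi-megabyte console log, a short script helps; the following is a minimal sketch, not part of the test suite. The file name nvmf_failover.log is hypothetical, and the regexes match only the nvme_qpair.c / bdev_nvme.c line formats shown in this run.

# Minimal sketch (assumes a saved copy of this console log).
import re
from collections import Counter

CMD_RE = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
                    r"sqid:\d+ cid:\d+ nsid:\d+ lba:(\d+) len:\d+")
ABORT_RE = re.compile(r"ABORTED - SQ DELETION \(00/08\)")
FAILOVER_RE = re.compile(r"bdev_nvme_failover_trid: \*NOTICE\*: (.*)")

counts = Counter()                      # aborted commands per opcode
lbas = {"READ": [], "WRITE": []}        # LBA span touched by the storm
aborts = 0                              # completions printed as aborted
with open("nvmf_failover.log") as fh:   # hypothetical log capture
    for line in fh:
        if (m := CMD_RE.search(line)):
            counts[m[1]] += 1
            lbas[m[1]].append(int(m[2]))
        if ABORT_RE.search(line):
            aborts += 1
        if (f := FAILOVER_RE.search(line)):
            print("failover:", f[1])

for op in ("READ", "WRITE"):
    if lbas[op]:
        print(f"{op}: {counts[op]} aborted, lba {min(lbas[op])}..{max(lbas[op])}")
print("ABORTED - SQ DELETION completions:", aborts)

Run against the excerpt above, this would report the READ (lba 83744..84168) and WRITE (lba 84304..84680) spans and the failover notice rather than hundreds of near-identical lines.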
00:18:22.262 8840.40 IOPS, 34.53 MiB/s
[2024-11-19T01:59:32.877Z] 9033.67 IOPS, 35.29 MiB/s
[2024-11-19T01:59:32.877Z] 9131.71 IOPS, 35.67 MiB/s
[2024-11-19T01:59:32.877Z] 9181.50 IOPS, 35.87 MiB/s
[2024-11-19T01:59:32.877Z] 9217.11 IOPS, 36.00 MiB/s
00:18:22.262 [2024-11-19 01:59:26.255981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:22.262 [2024-11-19 01:59:26.256064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same pair repeats for the remaining queued ASYNC EVENT REQUESTs on qid:0 (cid:1, cid:2, cid:3) ...]
00:18:22.262 [2024-11-19 01:59:26.256171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f69c0 is same with the state(6) to be set
00:18:22.262 [2024-11-19 01:59:26.257679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:46848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:22.262 [2024-11-19 01:59:26.257715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical print_command/print_completion pairs repeat at 01:59:26.257-26.260 for every request still queued on qid:1: WRITEs covering lba 46848-47352 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READs covering lba 46528-46776 (len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:18:22.265
[2024-11-19 01:59:26.260709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:46784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.265 [2024-11-19 01:59:26.260722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.265 [2024-11-19 01:59:26.260735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:46792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.265 [2024-11-19 01:59:26.260747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.265 [2024-11-19 01:59:26.260761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:46800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.265 [2024-11-19 01:59:26.260777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.265 [2024-11-19 01:59:26.260792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:46808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.265 [2024-11-19 01:59:26.260804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.265 [2024-11-19 01:59:26.260818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:46816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.265 [2024-11-19 01:59:26.260830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.265 [2024-11-19 01:59:26.260851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:46824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.265 [2024-11-19 01:59:26.260864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.265 [2024-11-19 01:59:26.260878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:46832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.265 [2024-11-19 01:59:26.260890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.265 [2024-11-19 01:59:26.260903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24284a0 is same with the state(6) to be set 00:18:22.265 [2024-11-19 01:59:26.260918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.265 [2024-11-19 01:59:26.260927] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.265 [2024-11-19 01:59:26.260937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46840 len:8 PRP1 0x0 PRP2 0x0 00:18:22.265 [2024-11-19 01:59:26.260949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.265 [2024-11-19 01:59:26.260977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.265 [2024-11-19 01:59:26.261003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.265 [2024-11-19 01:59:26.261013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:47360 len:8 PRP1 0x0 PRP2 0x0 00:18:22.265 [2024-11-19 01:59:26.261027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.265 [2024-11-19 01:59:26.261041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.265 [2024-11-19 01:59:26.261050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.265 [2024-11-19 01:59:26.261060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47368 len:8 PRP1 0x0 PRP2 0x0 00:18:22.265 [2024-11-19 01:59:26.261073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.265 [2024-11-19 01:59:26.261087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.265 [2024-11-19 01:59:26.261097] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.265 [2024-11-19 01:59:26.261107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47376 len:8 PRP1 0x0 PRP2 0x0 00:18:22.265 [2024-11-19 01:59:26.261120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.265 [2024-11-19 01:59:26.261134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.265 [2024-11-19 01:59:26.261144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.265 [2024-11-19 01:59:26.261154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47384 len:8 PRP1 0x0 PRP2 0x0 00:18:22.265 [2024-11-19 01:59:26.261167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.265 [2024-11-19 01:59:26.261181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.265 [2024-11-19 01:59:26.261190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.265 [2024-11-19 01:59:26.261203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47392 len:8 PRP1 0x0 PRP2 0x0 00:18:22.265 [2024-11-19 01:59:26.261216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.265 [2024-11-19 01:59:26.261236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.265 [2024-11-19 01:59:26.261247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.265 [2024-11-19 01:59:26.261258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47400 len:8 PRP1 0x0 PRP2 0x0 00:18:22.265 [2024-11-19 01:59:26.261271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.265 [2024-11-19 01:59:26.261285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.265 [2024-11-19 01:59:26.261295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.265 [2024-11-19 01:59:26.261305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47408 len:8 PRP1 0x0 PRP2 0x0 
00:18:22.265 [2024-11-19 01:59:26.261318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.265 [2024-11-19 01:59:26.261346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.265 [2024-11-19 01:59:26.261371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.265 [2024-11-19 01:59:26.261395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47416 len:8 PRP1 0x0 PRP2 0x0 00:18:22.265 [2024-11-19 01:59:26.261406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.265 [2024-11-19 01:59:26.261418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.265 [2024-11-19 01:59:26.261427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.265 [2024-11-19 01:59:26.261436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47424 len:8 PRP1 0x0 PRP2 0x0 00:18:22.265 [2024-11-19 01:59:26.261447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.265 [2024-11-19 01:59:26.261460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.265 [2024-11-19 01:59:26.261468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.265 [2024-11-19 01:59:26.261477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47432 len:8 PRP1 0x0 PRP2 0x0 00:18:22.265 [2024-11-19 01:59:26.261489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.265 [2024-11-19 01:59:26.261501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.265 [2024-11-19 01:59:26.261510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.265 [2024-11-19 01:59:26.261519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47440 len:8 PRP1 0x0 PRP2 0x0 00:18:22.265 [2024-11-19 01:59:26.261530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.265 [2024-11-19 01:59:26.261542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.265 [2024-11-19 01:59:26.261551] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.265 [2024-11-19 01:59:26.261560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47448 len:8 PRP1 0x0 PRP2 0x0 00:18:22.265 [2024-11-19 01:59:26.261581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.265 [2024-11-19 01:59:26.261595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.265 [2024-11-19 01:59:26.261604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.265 [2024-11-19 01:59:26.261615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47456 len:8 PRP1 0x0 PRP2 0x0 00:18:22.265 [2024-11-19 01:59:26.261633] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.265 [2024-11-19 01:59:26.261647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.265 [2024-11-19 01:59:26.261656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.266 [2024-11-19 01:59:26.261665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47464 len:8 PRP1 0x0 PRP2 0x0 00:18:22.266 [2024-11-19 01:59:26.261677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.266 [2024-11-19 01:59:26.261689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.266 [2024-11-19 01:59:26.261698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.266 [2024-11-19 01:59:26.261707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47472 len:8 PRP1 0x0 PRP2 0x0 00:18:22.266 [2024-11-19 01:59:26.261718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.266 [2024-11-19 01:59:26.261730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.266 [2024-11-19 01:59:26.261739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.266 [2024-11-19 01:59:26.261748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47480 len:8 PRP1 0x0 PRP2 0x0 00:18:22.266 [2024-11-19 01:59:26.261760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.266 [2024-11-19 01:59:26.261772] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.266 [2024-11-19 01:59:26.261780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.266 [2024-11-19 01:59:26.261789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47488 len:8 PRP1 0x0 PRP2 0x0 00:18:22.266 [2024-11-19 01:59:26.261801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.266 [2024-11-19 01:59:26.261813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.266 [2024-11-19 01:59:26.261822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.266 [2024-11-19 01:59:26.261831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47496 len:8 PRP1 0x0 PRP2 0x0 00:18:22.266 [2024-11-19 01:59:26.261843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.266 [2024-11-19 01:59:26.261855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.266 [2024-11-19 01:59:26.261863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.266 [2024-11-19 01:59:26.261873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47504 len:8 PRP1 0x0 PRP2 0x0 00:18:22.266 [2024-11-19 01:59:26.261884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.266 [2024-11-19 01:59:26.261922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.266 [2024-11-19 01:59:26.261933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.266 [2024-11-19 01:59:26.261944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47512 len:8 PRP1 0x0 PRP2 0x0 00:18:22.266 [2024-11-19 01:59:26.261957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.266 [2024-11-19 01:59:26.261970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.266 [2024-11-19 01:59:26.261980] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.266 [2024-11-19 01:59:26.261999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47520 len:8 PRP1 0x0 PRP2 0x0 00:18:22.266 [2024-11-19 01:59:26.262014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.266 [2024-11-19 01:59:26.262028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.266 [2024-11-19 01:59:26.262038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.266 [2024-11-19 01:59:26.262048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47528 len:8 PRP1 0x0 PRP2 0x0 00:18:22.266 [2024-11-19 01:59:26.262061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.266 [2024-11-19 01:59:26.262075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.266 [2024-11-19 01:59:26.262085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.266 [2024-11-19 01:59:26.262095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47536 len:8 PRP1 0x0 PRP2 0x0 00:18:22.266 [2024-11-19 01:59:26.262108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.266 [2024-11-19 01:59:26.262121] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:22.266 [2024-11-19 01:59:26.262132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:22.266 [2024-11-19 01:59:26.262142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47544 len:8 PRP1 0x0 PRP2 0x0 00:18:22.266 [2024-11-19 01:59:26.262155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:22.266 [2024-11-19 01:59:26.262218] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:18:22.266 [2024-11-19 01:59:26.262236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
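Everything above is one failover iteration: the old qpair is torn down, every outstanding and queued command is completed as ABORTED - SQ DELETION, and bdev_nvme moves the trid to the next listener. The phase passes only if all three planned failovers complete, which the script verifies right after the run with a simple count over the captured output (a condensed sketch of the check; the exact trace form appears just below):

    # try.txt holds the bdevperf output captured for this phase
    count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
    (( count == 3 )) || return 1   # three paths, three successful resets expected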
00:18:22.266 [2024-11-19 01:59:26.265953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:18:22.266 [2024-11-19 01:59:26.265991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f69c0 (9): Bad file descriptor
00:18:22.266 [2024-11-19 01:59:26.296098] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:18:22.266 9141.70 IOPS, 35.71 MiB/s [2024-11-19T01:59:32.881Z]
00:18:22.266 9121.55 IOPS, 35.63 MiB/s [2024-11-19T01:59:32.881Z]
00:18:22.266 9187.42 IOPS, 35.89 MiB/s [2024-11-19T01:59:32.881Z]
00:18:22.266 9246.23 IOPS, 36.12 MiB/s [2024-11-19T01:59:32.881Z]
00:18:22.266 9304.36 IOPS, 36.35 MiB/s [2024-11-19T01:59:32.881Z]
00:18:22.266 9353.13 IOPS, 36.54 MiB/s
00:18:22.266 Latency(us)
00:18:22.266 Device Information          : runtime(s)  IOPS     MiB/s  Fail/s  TO/s  Average   min     max
00:18:22.266 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:22.266 Verification LBA range: start 0x0 length 0x4000
00:18:22.266 NVMe0n1                     : 15.01       9354.77  36.54  244.95  0.00  13303.29  554.82  16920.20
00:18:22.266 ===================================================================================================================
00:18:22.266 Total                       : 9354.77  36.54  244.95  0.00  13303.29  554.82  16920.20
00:18:22.266 Received shutdown signal, test time was about 15.000000 seconds
00:18:22.266
00:18:22.266 Latency(us)
00:18:22.266 Device Information          : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min   max
00:18:22.266 ===================================================================================================================
00:18:22.266 Total                       : 0.00  0.00  0.00  0.00  0.00  0.00  0.00
01:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
01:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
01:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
01:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=90074
01:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
01:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 90074 /var/tmp/bdevperf.sock
01:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 90074 ']'
01:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
01:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
01:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
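The relaunch just above is bdevperf's RPC-driven mode: started with -z it sits idle on the socket given by -r until a bdev is attached and a perform_tests RPC arrives. Run by hand, the same sequence looks roughly like this (paths and addresses as used in this log; a sketch, not the harness's exact code):

    # start bdevperf idle, listening for RPCs on its own socket
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 &
    # attach the target path, then kick off the configured workload
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests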
01:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
01:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
01:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
01:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
01:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:18:22.266 [2024-11-19 01:59:32.807178] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
01:59:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:18:22.525 [2024-11-19 01:59:33.091522] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 ***
01:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:18:23.093 NVMe0n1
01:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:18:23.352
01:59:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:18:23.611
01:59:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
01:59:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
01:59:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
01:59:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
01:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
01:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
01:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=90149
01:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
01:59:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 90149
00:18:28.356 {
00:18:28.356   "results": [
00:18:28.356     {
00:18:28.356       "job": "NVMe0n1",
00:18:28.356       "core_mask": "0x1",
00:18:28.356       "workload": "verify",
00:18:28.356       "status": "finished",
00:18:28.356       "verify_range": {
00:18:28.356         "start": 0,
00:18:28.356         "length": 16384
00:18:28.356       },
00:18:28.356       "queue_depth": 128,
00:18:28.356       "io_size": 4096,
00:18:28.356       "runtime": 1.0056,
00:18:28.356       "iops": 7276.252983293556,
00:18:28.356       "mibps": 28.422863215990454,
00:18:28.356       "io_failed": 0,
00:18:28.356       "io_timeout": 0,
00:18:28.356       "avg_latency_us": 17523.716694621493,
00:18:28.356       "min_latency_us": 2219.287272727273,
00:18:28.356       "max_latency_us": 15847.796363636364
00:18:28.356     }
00:18:28.356   ],
00:18:28.356   "core_count": 1
00:18:28.356 }
01:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
[2024-11-19 01:59:32.250516] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization...
[2024-11-19 01:59:32.250634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90074 ]
[2024-11-19 01:59:32.396009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-19 01:59:32.414985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-19 01:59:32.444683] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
[2024-11-19 01:59:34.534499] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
[2024-11-19 01:59:34.534654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-19 01:59:34.534681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-19 01:59:34.534699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-19 01:59:34.534713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-19 01:59:34.534727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-19 01:59:34.534740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-19 01:59:34.534755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-19 01:59:34.534768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-19 01:59:34.534782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
[2024-11-19 01:59:34.534833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
[2024-11-19 01:59:34.534864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a39c0 (9): Bad file descriptor
[2024-11-19 01:59:34.539447] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
Running I/O for 1 seconds...
00:18:28.357 7189.00 IOPS, 28.08 MiB/s
00:18:28.357 Latency(us)
00:18:28.357 Device Information          : runtime(s)  IOPS     MiB/s  Fail/s  TO/s  Average   min      max
00:18:28.357 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:28.357 Verification LBA range: start 0x0 length 0x4000
00:18:28.357 NVMe0n1                     : 1.01        7276.25  28.42  0.00    0.00  17523.72  2219.29  15847.80
00:18:28.357 ===================================================================================================================
00:18:28.357 Total                       : 7276.25  28.42  0.00  0.00  17523.72  2219.29  15847.80
01:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
01:59:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
01:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
01:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
01:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
01:59:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
01:59:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 90074
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 90074 ']'
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 90074
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90074
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
killing process with pid 90074
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90074'
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 90074
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 90074
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
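The @95-@103 sequence above is the path-flip pattern this test exercises: confirm the controller is present, drop the currently active trid, give multipath a moment to fail over, then confirm the controller is still there. Condensed (addresses as used here; a sketch rather than the script verbatim):

    # the bdev must survive losing its active path
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3   # allow bdev_nvme to fail over to a remaining path
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0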
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:18:33.244 rmmod nvme_tcp
00:18:33.244 rmmod nvme_fabrics
00:18:33.244 rmmod nvme_keyring
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 89835 ']'
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 89835
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 89835 ']'
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 89835
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89835
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
killing process with pid 89835
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89835'
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 89835
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 89835
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']'
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
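nvmftestfini above winds the initiator stack back down: sync, unload the kernel NVMe/TCP modules (the rmmod lines are modprobe -v output), kill the target process, and restore iptables. The module unload step alone, as echoed in the trace:

    # nvme-tcp pulls nvme_fabrics and nvme_keyring out with it
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics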
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
01:59:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
01:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
01:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
01:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
01:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
01:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
01:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
01:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
01:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
01:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
01:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
01:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
01:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns
01:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
01:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
01:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
01:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0
00:18:33.762
00:18:33.762 real    0m31.123s
00:18:33.762 user    2m0.220s
00:18:33.762 sys     0m5.395s
01:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable
01:59:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:18:33.762 ************************************
00:18:33.762 END TEST nvmf_failover
00:18:33.762 ************************************
01:59:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp
01:59:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
01:59:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
01:59:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:18:33.762 ************************************
00:18:33.762 START TEST nvmf_host_discovery
00:18:33.762 ************************************
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp
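As with nvmf_failover above, each test here runs under run_test, which produces the START/END banners and the real/user/sys timing seen in this log. In essence it is a wrapper along these lines (a simplified sketch, not the harness's exact source):

    # run_test <name> <script> [args...]  ->  banner, timed run, banner
    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }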
00:18:33.762 * Looking for test storage...
00:18:33.762 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]]
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-:
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-:
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<'
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:18:34.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:34.022 --rc genhtml_branch_coverage=1
00:18:34.022 --rc genhtml_function_coverage=1
00:18:34.022 --rc genhtml_legend=1
00:18:34.022 --rc geninfo_all_blocks=1
00:18:34.022 --rc geninfo_unexecuted_blocks=1
00:18:34.022
00:18:34.022 '
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:18:34.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:34.022 --rc genhtml_branch_coverage=1
00:18:34.022 --rc genhtml_function_coverage=1
00:18:34.022 --rc genhtml_legend=1
00:18:34.022 --rc geninfo_all_blocks=1
00:18:34.022 --rc geninfo_unexecuted_blocks=1
00:18:34.022
00:18:34.022 '
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:18:34.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:34.022 --rc genhtml_branch_coverage=1
00:18:34.022 --rc genhtml_function_coverage=1
00:18:34.022 --rc genhtml_legend=1
00:18:34.022 --rc geninfo_all_blocks=1
00:18:34.022 --rc geninfo_unexecuted_blocks=1
00:18:34.022
00:18:34.022 '
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:18:34.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:34.022 --rc genhtml_branch_coverage=1
00:18:34.022 --rc genhtml_function_coverage=1
00:18:34.022 --rc genhtml_legend=1
00:18:34.022 --rc geninfo_all_blocks=1
00:18:34.022 --rc geninfo_unexecuted_blocks=1
00:18:34.022
00:18:34.022 '
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
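The scripts/common.sh trace above is its generic version comparator concluding that lcov 1.15 < 2, so the older lcov option set is selected. Reduced to its core, the algorithm splits both versions on the characters . - : and compares component by component, numerically. A minimal standalone equivalent (a sketch; the real cmp_versions also validates each component):

    # usage: version_lt 1.15 2  ->  exit status 0 if the first version is lower
    version_lt() {
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # versions are equal
    }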
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... existing PATH, already carrying several copies of the same toolchain directories ...]
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same value, re-prepended ...]
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same value, re-prepended ...]
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... the accumulated PATH printed back ...]
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
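The variables above name the pieces of a bridged veth topology that nvmf_veth_init builds next. Its first phase tears down any leftover interfaces from a previous run, so the "Cannot find device" errors in the following trace are expected; the paired "# true" entries suggest each cleanup command is guarded so a missing device cannot abort the script, roughly:

    # hypothetical guard reconstructed from the trace; only the traced "true" commands are certain
    ip link delete nvmf_br type bridge || true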
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:18:34.023 Cannot find device "nvmf_init_br"
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:18:34.023 Cannot find device "nvmf_init_br2"
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:18:34.023 Cannot find device "nvmf_tgt_br"
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:18:34.023 Cannot find device "nvmf_tgt_br2"
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:18:34.023 Cannot find device "nvmf_init_br"
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:18:34.023 Cannot find device "nvmf_init_br2"
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:18:34.023 Cannot find device "nvmf_tgt_br"
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:18:34.023 Cannot find device "nvmf_tgt_br2"
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:18:34.023 Cannot find device "nvmf_br"
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:18:34.023 Cannot find device "nvmf_init_if"
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:18:34.023 Cannot find device "nvmf_init_if2"
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:18:34.023 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:18:34.023 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:18:34.023 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
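Condensed, the setup just traced (plus the enslaving that follows) builds this topology; the sketch below is assembled from the traced ip commands, with the second veth pair (10.0.0.2/10.0.0.4) elided for brevity:

    ip netns add nvmf_tgt_ns_spdk                              # target-side network namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator pair, 10.0.0.1 end stays in root ns
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target pair, 10.0.0.3 end moves into the ns
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                            # the *_br peers are enslaved to this bridge,
    ip link set nvmf_init_br master nvmf_br                    # giving the root ns a path to the namespace
    ip link set nvmf_tgt_br master nvmf_br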
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
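Each ipts call above is traced a second time as a plain iptables invocation with an added "-m comment" marker, so the wrapper in nvmf/common.sh presumably looks like the sketch below (a reconstruction; only the expanded iptables commands in the trace are certain). The marker lets teardown later delete exactly the rules this test added:

    ipts() {
        # tag every rule with the original arguments so cleanup can match on SPDK_NVMF:
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }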
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:18:34.283 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:18:34.283 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.109 ms
00:18:34.283
00:18:34.283 --- 10.0.0.3 ping statistics ---
00:18:34.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:34.283 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:18:34.283 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:18:34.283 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms
00:18:34.283
00:18:34.283 --- 10.0.0.4 ping statistics ---
00:18:34.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:34.283 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:18:34.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:18:34.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms
00:18:34.283
00:18:34.283 --- 10.0.0.1 ping statistics ---
00:18:34.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:34.283 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:18:34.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:18:34.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms
00:18:34.283
00:18:34.283 --- 10.0.0.2 ping statistics ---
00:18:34.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:34.283 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=90469
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 90469
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 90469 ']'
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:34.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:34.283 01:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
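waitforlisten blocks until the freshly launched app is both alive and answering on its RPC socket. The loop body itself is not shown in the trace; a minimal sketch consistent with the traced locals (pid argument, rpc_addr, max_retries=100) would be:

    # assumed shape of the helper; the real version also probes the RPC server, not just the socket file
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((max_retries--)); do
            kill -0 "$pid" 2>/dev/null || return 1   # give up if the app died
            [[ -S $rpc_addr ]] && return 0           # socket exists, app is listening
            sleep 0.1
        done
        return 1
    }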
00:18:34.542 [2024-11-19 01:59:44.949812] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization...
00:18:34.542 [2024-11-19 01:59:44.949941] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:34.542 [2024-11-19 01:59:45.102674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:34.542 [2024-11-19 01:59:45.125824] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:34.542 [2024-11-19 01:59:45.125886] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:34.542 [2024-11-19 01:59:45.125924] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:34.542 [2024-11-19 01:59:45.125946] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:34.542 [2024-11-19 01:59:45.125954] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:34.542 [2024-11-19 01:59:45.126300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:18:34.801 [2024-11-19 01:59:45.160218] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:35.378 [2024-11-19 01:59:45.879056] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:35.378 [2024-11-19 01:59:45.887132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 ***
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
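With the discovery subsystem now listening on 10.0.0.3:8009, an initiator could enumerate it with stock nvme-cli; this is illustrative only, since the run below drives discovery through SPDK's own bdev_nvme layer instead:

    # hypothetical manual check against the same listener
    nvme discover -t tcp -a 10.0.0.3 -s 8009 -q nqn.2021-12.io.spdk:test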
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:35.378 null0
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:35.378 null1
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=90501
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 90501 /tmp/host.sock
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 90501 ']'
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:35.378 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:35.378 01:59:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
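From here on two SPDK apps are running, distinguished by RPC socket; rpc_cmd without -s targets the namespaced target, while -s /tmp/host.sock targets the host-side app that will act as the NVMe initiator:

    # summary of the two traced launches (paths as in this run)
    # target, inside nvmf_tgt_ns_spdk:  nvmf_tgt -i 0 -e 0xFFFF -m 0x2      -> RPC on /var/tmp/spdk.sock
    # host, in the root namespace:      nvmf_tgt -m 0x1 -r /tmp/host.sock   -> RPC on /tmp/host.sock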
00:18:35.378 [2024-11-19 01:59:45.974818] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization...
00:18:35.378 [2024-11-19 01:59:45.974914] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90501 ]
00:18:35.638 [2024-11-19 01:59:46.128651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:35.638 [2024-11-19 01:59:46.151957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:35.638 [2024-11-19 01:59:46.183947] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:18:35.638 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:35.638 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0
00:18:35.638 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:18:35.638 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:18:35.638 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.638 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:35.638 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.638 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:18:35.638 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.638 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:35.638 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
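The checks that follow repeatedly poll two tiny helpers. Reconstructed from the traced pipelines (rpc_cmd piped through jq, sort and xargs), they presumably read:

    get_subsystem_names() {
        # controller names the host-side bdev_nvme layer currently knows about
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        # bdevs created on the host side as discovered namespaces are attached
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }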
00:18:35.638 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0
00:18:35.638 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names
00:18:35.638 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:18:35.638 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:18:35.638 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.638 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:18:35.638 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:35.638 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:18:35.898 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.898 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:18:35.898 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list
00:18:35.898 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:18:35.898 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:18:35.898 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:18:35.898 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.898 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:18:35.898 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:35.898 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.898 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:18:35.898 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:18:35.898 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.898 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:35.898 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.898 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:18:35.898 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:18:35.898 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:18:35.898 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:18:35.898 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:35.899 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:18:36.159 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:36.159 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:18:36.159 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:18:36.159 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:36.160 [2024-11-19 01:59:46.555346] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:18:36.160 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:36.420 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:36.420 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]]
00:18:36.420 01:59:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:18:36.679 [2024-11-19 01:59:47.251885] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached
00:18:36.679 [2024-11-19 01:59:47.251933] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected
00:18:36.679 [2024-11-19 01:59:47.251956] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:18:36.679 [2024-11-19 01:59:47.257954] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0
00:18:36.938 [2024-11-19 01:59:47.312348] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420
00:18:36.938 [2024-11-19 01:59:47.313195] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x11d59c0:1 started.
00:18:36.938 [2024-11-19 01:59:47.314945] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done
00:18:36.938 [2024-11-19 01:59:47.314983] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again
00:18:36.938 [2024-11-19 01:59:47.320405] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x11d59c0 was disconnected and freed. delete nvme_qpair.
00:18:37.198 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:18:37.198 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:18:37.198 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:18:37.198 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:18:37.198 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:18:37.198 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:18:37.198 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:37.198 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:37.198 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:18:37.457 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:37.457 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:37.457 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:18:37.457 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:18:37.457 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:18:37.457 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:18:37.457 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:18:37.457 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:18:37.457 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:18:37.457 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]]
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
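waitforcondition, whose internals the trace keeps expanding, amounts to a bounded retry loop; a sketch matching the traced locals (cond, max=10, the (( max-- )) guard, eval, sleep 1):

    waitforcondition() {
        local cond=$1
        local max=10
        while ((max--)); do
            eval "$cond" && return 0   # condition strings like '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
            sleep 1
        done
        return 1
    }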
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:37.458 01:59:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:37.458 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:18:37.458 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:18:37.458 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:18:37.458 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:18:37.458 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:18:37.458 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:37.458 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:37.458 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:37.458 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:18:37.458 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:18:37.458 [2024-11-19 01:59:48.034003] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x11bfce0:1 started.
00:18:37.458 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:18:37.458 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:18:37.458 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:18:37.458 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:18:37.458 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:18:37.458 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:18:37.458 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:18:37.458 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:18:37.458 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:37.458 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:37.458 [2024-11-19 01:59:48.041332] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x11bfce0 was disconnected and freed. delete nvme_qpair.
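get_notification_count evidently counts new target notifications past a high-water mark and advances it; the traced values (notification_count=1 with notify_id=1 above, notify_id later reaching 2 and then staying there when the count is 0) fit this reconstruction:

    get_notification_count() {
        # assumed helper body: notifications issued since the last observed id
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }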
00:18:37.458 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:37.719 [2024-11-19 01:59:48.156909] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:18:37.719 [2024-11-19 01:59:48.157117] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:18:37.719 [2024-11-19 01:59:48.157141] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:18:37.719 [2024-11-19 01:59:48.163126] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:18:37.719 [2024-11-19 01:59:48.225469] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421
00:18:37.719 [2024-11-19 01:59:48.225549] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done
00:18:37.719 [2024-11-19 01:59:48.225562] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again
00:18:37.719 [2024-11-19 01:59:48.225567] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:37.719 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.979 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:37.979 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:37.979 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:37.979 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:37.979 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:37.979 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.979 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.979 [2024-11-19 01:59:48.389453] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:18:37.979 [2024-11-19 01:59:48.389481] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:37.979 [2024-11-19 01:59:48.392466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.979 [2024-11-19 01:59:48.392526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.979 [2024-11-19 01:59:48.392573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.979 [2024-11-19 01:59:48.392582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.980 [2024-11-19 01:59:48.392591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.980 [2024-11-19 01:59:48.392600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.980 [2024-11-19 01:59:48.392609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:37.980 [2024-11-19 01:59:48.392617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:37.980 [2024-11-19 01:59:48.392626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b1080 is same with the state(6) to be set 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:37.980 [2024-11-19 01:59:48.395711] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:18:37.980 [2024-11-19 
01:59:48.395736] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:37.980 [2024-11-19 01:59:48.395788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b1080 (9): Bad file descriptor 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:37.980 
01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.980 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- 
# eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.240 01:59:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.617 [2024-11-19 01:59:49.812780] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:39.617 [2024-11-19 01:59:49.812803] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:39.617 [2024-11-19 01:59:49.812819] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:39.617 [2024-11-19 01:59:49.818811] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:18:39.617 [2024-11-19 01:59:49.877080] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:18:39.617 [2024-11-19 01:59:49.877931] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x11a21e0:1 started. 00:18:39.617 [2024-11-19 01:59:49.879659] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:39.617 [2024-11-19 01:59:49.879691] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:39.617 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.617 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:39.617 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:18:39.617 [2024-11-19 01:59:49.881690] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x11a21e0 was disconnected and freed. delete nvme_qpair. 
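Every assertion in this discovery test is the same pattern: a waitforcondition poll wrapped around a small RPC query helper. Reconstructed from the xtrace above (autotest_common.sh lines 918-922 and host/discovery.sh lines 55/59/63/74-75), the helpers look roughly like the sketch below. The retry budget of 10 is visible in the trace; the sleep between polls is an assumption added for illustration, and rpc_cmd is the suite's wrapper around scripts/rpc.py.

# Poll a shell condition until it holds or the retry budget runs out.
waitforcondition() {
    local cond=$1
    local max=10                      # retry budget, as traced
    while (( max-- )); do
        eval "$cond" && return 0
        sleep 0.5                     # interval assumed; not visible in the trace
    done
    return 1
}

# Query helpers: ask the host-side SPDK app over its RPC socket and flatten
# the JSON reply into one sorted, space-separated line for string comparison.
get_subsystem_names() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
get_subsystem_paths() {
    # TCP service ports of every active path of one controller, e.g. "4420 4421".
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
        jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
get_notification_count() {
    # Count events newer than the cursor, then advance it; this matches the
    # traced progression of notify_id (2 -> 2 -> 4 as add/remove events land).
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}

The duplicate-start assertion that follows leans on the bdev layer refusing a second discovery service for the same name and address: re-running bdev_nvme_start_discovery -b nvme against 10.0.0.3:8009 returns JSON-RPC error -17 ("File exists"), which the NOT wrapper turns into a test pass.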
00:18:39.617 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:39.617 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:39.617 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.617 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:39.617 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.617 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:39.617 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.617 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.617 request: 00:18:39.617 { 00:18:39.617 "name": "nvme", 00:18:39.617 "trtype": "tcp", 00:18:39.617 "traddr": "10.0.0.3", 00:18:39.617 "adrfam": "ipv4", 00:18:39.617 "trsvcid": "8009", 00:18:39.617 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:39.617 "wait_for_attach": true, 00:18:39.617 "method": "bdev_nvme_start_discovery", 00:18:39.617 "req_id": 1 00:18:39.617 } 00:18:39.617 Got JSON-RPC error response 00:18:39.617 response: 00:18:39.617 { 00:18:39.617 "code": -17, 00:18:39.617 "message": "File exists" 00:18:39.617 } 00:18:39.617 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:39.617 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:18:39.617 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:39.617 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:39.617 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:39.617 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:18:39.617 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:39.617 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:39.617 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.617 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.618 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:39.618 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:39.618 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.618 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:18:39.618 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:18:39.618 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:39.618 01:59:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:39.618 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:39.618 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.618 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.618 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:39.618 01:59:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.618 request: 00:18:39.618 { 00:18:39.618 "name": "nvme_second", 00:18:39.618 "trtype": "tcp", 00:18:39.618 "traddr": "10.0.0.3", 00:18:39.618 "adrfam": "ipv4", 00:18:39.618 "trsvcid": "8009", 00:18:39.618 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:39.618 "wait_for_attach": true, 00:18:39.618 "method": "bdev_nvme_start_discovery", 00:18:39.618 "req_id": 1 00:18:39.618 } 00:18:39.618 Got JSON-RPC error response 00:18:39.618 response: 00:18:39.618 { 00:18:39.618 "code": -17, 00:18:39.618 "message": "File exists" 00:18:39.618 } 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.618 01:59:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:40.557 [2024-11-19 01:59:51.156166] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:40.557 [2024-11-19 01:59:51.156239] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11bf390 with addr=10.0.0.3, port=8010 00:18:40.558 [2024-11-19 01:59:51.156257] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:40.558 [2024-11-19 01:59:51.156267] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:40.558 [2024-11-19 01:59:51.156275] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:18:41.936 [2024-11-19 01:59:52.156168] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:41.936 [2024-11-19 01:59:52.156435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d51a0 with addr=10.0.0.3, port=8010 00:18:41.936 [2024-11-19 01:59:52.156471] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:41.936 [2024-11-19 01:59:52.156483] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:41.936 [2024-11-19 01:59:52.156493] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:18:42.946 [2024-11-19 01:59:53.156049] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:18:42.946 request: 00:18:42.946 { 00:18:42.946 "name": "nvme_second", 00:18:42.946 "trtype": "tcp", 00:18:42.946 "traddr": "10.0.0.3", 00:18:42.946 "adrfam": "ipv4", 00:18:42.946 "trsvcid": "8010", 00:18:42.946 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:42.946 "wait_for_attach": false, 00:18:42.946 "attach_timeout_ms": 3000, 00:18:42.946 "method": "bdev_nvme_start_discovery", 00:18:42.946 "req_id": 1 00:18:42.946 } 00:18:42.946 Got JSON-RPC error response 00:18:42.946 response: 00:18:42.946 { 00:18:42.946 "code": -110, 00:18:42.946 "message": "Connection timed out" 00:18:42.946 } 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - 
SIGINT SIGTERM EXIT 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 90501 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:42.946 rmmod nvme_tcp 00:18:42.946 rmmod nvme_fabrics 00:18:42.946 rmmod nvme_keyring 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 90469 ']' 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 90469 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 90469 ']' 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 90469 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90469 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:42.946 killing process with pid 90469 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90469' 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 90469 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 90469 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
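The trace next runs nvmf_veth_fini, which unwinds the virtual test network in roughly the reverse order it was built. Condensed, the sequence is the sketch below; the interface and namespace names are the suite's fixed ones (visible in the trace), while the `|| true` guards are an assumption standing in for the suite's own error suppression, and the final `ip netns delete` is an assumed expansion of the remove_spdk_ns helper, whose body is hidden behind xtrace_disable_per_cmd.

# Detach every leg from the bridge and bring it down...
for leg in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$leg" nomaster || true
    ip link set "$leg" down || true
done
# ...then delete the bridge, the initiator-side veths, the target-side veths
# (inside the namespace), and finally the namespace itself.
ip link delete nvmf_br type bridge || true
ip link delete nvmf_init_if || true
ip link delete nvmf_init_if2 || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
ip netns delete nvmf_tgt_ns_spdk || true   # assumed body of remove_spdk_ns

The iptr step just before this is a similar round-trip trick: iptables-save | grep -v SPDK_NVMF | iptables-restore drops every SPDK-tagged firewall rule in one pass without disturbing the rest of the ruleset.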
00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:42.946 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:43.206 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:43.206 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:43.206 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:43.206 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:43.206 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:43.206 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:43.206 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:43.206 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:43.206 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:43.206 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:43.206 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:43.206 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:43.206 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:43.206 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.206 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:43.206 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.206 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:18:43.206 00:18:43.206 real 0m9.527s 00:18:43.206 user 0m17.540s 00:18:43.206 sys 0m1.910s 00:18:43.206 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.206 ************************************ 00:18:43.206 END TEST nvmf_host_discovery 00:18:43.206 ************************************ 00:18:43.206 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:43.206 01:59:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:43.206 01:59:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:43.206 01:59:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.206 01:59:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.466 ************************************ 00:18:43.466 START TEST nvmf_host_multipath_status 00:18:43.466 ************************************ 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 
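Before multipath_status.sh gets to work, its prologue (traced next) gates the lcov coverage options on the installed lcov version via scripts/common.sh's cmp_versions. The idea reduces to a component-wise compare after splitting on dots and dashes; a condensed sketch, omitting the per-component numeric validation ([[ $d =~ ^[0-9]+$ ]]) that the real helper performs:

# Compare two dotted version strings under an operator ('<' in the trace).
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
            [[ $op == '>' || $op == '>=' ]]; return
        elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
            [[ $op == '<' || $op == '<=' ]]; return
        fi
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]
}
lt() { cmp_versions "$1" '<' "$2"; }    # so "lt 1.15 2" succeeds, as traced,
                                        # selecting the branch-coverage flags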
00:18:43.466 * Looking for test storage... 00:18:43.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:43.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.466 --rc genhtml_branch_coverage=1 00:18:43.466 --rc genhtml_function_coverage=1 00:18:43.466 --rc genhtml_legend=1 00:18:43.466 --rc geninfo_all_blocks=1 00:18:43.466 --rc geninfo_unexecuted_blocks=1 00:18:43.466 00:18:43.466 ' 00:18:43.466 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:43.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.466 --rc genhtml_branch_coverage=1 00:18:43.466 --rc genhtml_function_coverage=1 00:18:43.466 --rc genhtml_legend=1 00:18:43.466 --rc geninfo_all_blocks=1 00:18:43.466 --rc geninfo_unexecuted_blocks=1 00:18:43.467 00:18:43.467 ' 00:18:43.467 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:43.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.467 --rc genhtml_branch_coverage=1 00:18:43.467 --rc genhtml_function_coverage=1 00:18:43.467 --rc genhtml_legend=1 00:18:43.467 --rc geninfo_all_blocks=1 00:18:43.467 --rc geninfo_unexecuted_blocks=1 00:18:43.467 00:18:43.467 ' 00:18:43.467 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:43.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.467 --rc genhtml_branch_coverage=1 00:18:43.467 --rc genhtml_function_coverage=1 00:18:43.467 --rc genhtml_legend=1 00:18:43.467 --rc geninfo_all_blocks=1 00:18:43.467 --rc geninfo_unexecuted_blocks=1 00:18:43.467 00:18:43.467 ' 00:18:43.467 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:43.467 01:59:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:18:43.467 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:43.467 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:43.467 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:43.467 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:43.467 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:43.467 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:43.467 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:43.467 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:43.467 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:43.467 01:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:43.467 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:43.467 Cannot find device "nvmf_init_br" 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:18:43.467 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:43.468 Cannot find device "nvmf_init_br2" 00:18:43.468 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:18:43.468 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:43.468 Cannot find device "nvmf_tgt_br" 00:18:43.468 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:18:43.468 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:43.468 Cannot find device "nvmf_tgt_br2" 00:18:43.468 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:18:43.468 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:43.468 Cannot find device "nvmf_init_br" 00:18:43.468 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:18:43.468 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:43.726 Cannot find device "nvmf_init_br2" 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:43.726 Cannot find device "nvmf_tgt_br" 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:43.726 Cannot find device "nvmf_tgt_br2" 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:43.726 Cannot find device "nvmf_br" 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:18:43.726 Cannot find device "nvmf_init_if" 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:43.726 Cannot find device "nvmf_init_if2" 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:43.726 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:43.726 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:43.726 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:43.985 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:43.985 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:18:43.985 00:18:43.985 --- 10.0.0.3 ping statistics --- 00:18:43.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.985 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:43.985 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:43.985 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.032 ms 00:18:43.985 00:18:43.985 --- 10.0.0.4 ping statistics --- 00:18:43.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.985 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:43.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:43.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:18:43.985 00:18:43.985 --- 10.0.0.1 ping statistics --- 00:18:43.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.985 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:43.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:43.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.035 ms 00:18:43.985 00:18:43.985 --- 10.0.0.2 ping statistics --- 00:18:43.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.985 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=90999 00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 90999 00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 90999 ']' 00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
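[Annotation] The "Cannot find device ..." and "Cannot open network namespace ..." messages above come from the teardown pass that nvmf_veth_init runs first; on a clean host those deletes have nothing to remove, and the "# true" guards after each one keep the trace going. The setup that follows builds the test topology out of veth pairs: the initiator ends (nvmf_init_if, nvmf_init_if2) stay in the root namespace at 10.0.0.1/10.0.0.2, the target ends (nvmf_tgt_if, nvmf_tgt_if2) move into the nvmf_tgt_ns_spdk namespace at 10.0.0.3/10.0.0.4, and the peer ends are enslaved to the nvmf_br bridge so the two sides can reach each other. A minimal single-pair sketch, condensed from the trace (names, addresses, and rules taken from the log; the second pair is set up the same way, and error handling plus the iptables comment matching are omitted):

  # create the target network namespace
  ip netns add nvmf_tgt_ns_spdk
  # one veth pair per side: the *_if end carries traffic, the *_br end joins the bridge
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  # the target end lives inside the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # initiator at 10.0.0.1, target at 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  # bring the endpoints up (loopback too, for the in-namespace RPC socket)
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the peer ends together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # open the NVMe/TCP port and allow bridge-local forwarding
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # verify both directions before starting the target (the four pings in the trace)
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1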
00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.985 01:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:43.985 [2024-11-19 01:59:54.462193] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:18:43.985 [2024-11-19 01:59:54.462303] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.243 [2024-11-19 01:59:54.613947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:44.243 [2024-11-19 01:59:54.636331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.243 [2024-11-19 01:59:54.636400] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.243 [2024-11-19 01:59:54.636415] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.243 [2024-11-19 01:59:54.636425] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:44.243 [2024-11-19 01:59:54.636435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:44.243 [2024-11-19 01:59:54.637296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.243 [2024-11-19 01:59:54.637308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.243 [2024-11-19 01:59:54.669515] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:44.808 01:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:44.808 01:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:18:44.808 01:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:44.808 01:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:44.808 01:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:45.066 01:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.066 01:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=90999 00:18:45.066 01:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:45.066 [2024-11-19 01:59:55.667834] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.325 01:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:45.325 Malloc0 00:18:45.325 01:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:45.584 01:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:45.842 01:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:46.100 [2024-11-19 01:59:56.584751] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:46.100 01:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:46.358 [2024-11-19 01:59:56.864969] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:46.358 01:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=91055 00:18:46.358 01:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:46.358 01:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 91055 /var/tmp/bdevperf.sock 00:18:46.358 01:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 91055 ']' 00:18:46.358 01:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:46.358 01:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:46.358 01:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:46.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:46.358 01:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
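[Annotation] With networking up, the trace starts nvmf_tgt inside the namespace (nvmf/common.sh@508) and drives it over /var/tmp/spdk.sock: a TCP transport, a 64 MiB malloc bdev, one ANA-reporting subsystem, and two listeners on the same address so the host sees two distinct paths. bdevperf is then launched with its own RPC socket, and the two bdev_nvme_attach_controller calls just below reuse the same -b Nvme0 name, which is what assembles both listeners into one multipath bdev. A sketch of the bring-up, condensed from multipath_status.sh@33-@56 as traced here (command lines copied from the log; assumed to run from the SPDK repo root, with waits for process startup omitted):

  # target side: 2 cores, all tracepoint groups, inside the namespace
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # two listeners on one subsystem/address = two ANA-managed paths
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

  # host side: bdevperf with a private RPC socket, QD 128, 4 KiB verify I/O, 90 s
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &

  # same -b Nvme0 on both attaches merges the two paths into one multipath controller
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 \
      -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10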
00:18:46.358 01:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:46.358 01:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:46.617 01:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.617 01:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:18:46.617 01:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:46.875 01:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:47.133 Nvme0n1 00:18:47.133 01:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:47.701 Nvme0n1 00:18:47.701 01:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:18:47.701 01:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:49.610 02:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:18:49.610 02:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:18:49.869 02:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:50.127 02:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:18:51.065 02:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:18:51.065 02:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:51.065 02:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:51.065 02:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:51.323 02:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:51.323 02:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:51.323 02:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:51.323 02:00:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:51.582 02:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:51.582 02:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:51.842 02:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:51.842 02:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:52.101 02:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:52.101 02:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:52.101 02:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:52.101 02:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:52.362 02:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:52.362 02:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:52.362 02:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:52.362 02:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:52.622 02:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:52.622 02:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:52.622 02:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:52.622 02:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:52.881 02:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:52.881 02:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:18:52.881 02:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:53.140 02:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:53.399 02:00:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:18:54.335 02:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:18:54.335 02:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:54.335 02:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.335 02:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:54.594 02:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:54.594 02:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:54.594 02:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.594 02:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:54.853 02:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:54.853 02:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:54.853 02:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.853 02:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:55.112 02:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:55.112 02:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:55.112 02:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:55.112 02:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:55.371 02:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:55.371 02:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:55.371 02:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:55.371 02:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:55.630 02:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:55.630 02:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:55.630 02:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:55.630 02:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:55.889 02:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:55.889 02:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:18:55.889 02:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:56.148 02:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:18:56.407 02:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:18:57.387 02:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:18:57.388 02:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:57.388 02:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:57.388 02:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:57.956 02:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:57.956 02:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:57.956 02:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:57.956 02:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:57.956 02:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:57.956 02:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:57.956 02:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:57.956 02:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:58.523 02:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:58.523 02:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:18:58.523 02:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:58.523 02:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:58.523 02:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:58.523 02:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:58.523 02:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:58.524 02:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:58.782 02:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:58.782 02:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:58.782 02:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:58.782 02:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:59.040 02:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:59.040 02:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:18:59.040 02:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:59.299 02:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:59.558 02:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:19:00.941 02:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:19:00.941 02:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:00.941 02:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:00.941 02:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:00.941 02:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:00.941 02:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:00.941 02:00:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:00.941 02:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:01.199 02:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:01.199 02:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:01.199 02:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.199 02:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:01.458 02:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:01.458 02:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:01.458 02:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.458 02:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:01.716 02:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:01.716 02:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:01.716 02:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.716 02:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:01.976 02:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:01.976 02:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:01.976 02:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:01.976 02:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:02.235 02:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:02.235 02:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:19:02.235 02:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:02.493 02:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:02.752 02:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:19:03.688 02:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:19:03.688 02:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:03.688 02:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:03.688 02:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:03.947 02:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:03.947 02:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:03.947 02:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:03.947 02:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:04.205 02:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:04.205 02:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:04.205 02:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.205 02:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:04.462 02:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:04.462 02:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:04.462 02:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.462 02:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:04.721 02:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:04.721 02:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:04.721 02:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.721 02:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:19:04.985 02:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:04.985 02:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:04.985 02:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.985 02:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:05.244 02:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:05.244 02:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:19:05.244 02:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:05.503 02:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:05.762 02:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:19:06.699 02:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:19:06.699 02:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:06.699 02:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:06.699 02:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:06.958 02:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:06.958 02:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:06.958 02:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:06.958 02:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.217 02:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:07.217 02:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:07.217 02:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.217 02:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
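[Annotation] Every check_status round in this stretch is six of these probes: for each listener port the script pulls the io_paths dump from bdevperf and picks one boolean (current, connected, or accessible) off the path whose trsvcid matches. A sketch of the port_status helper as it can be read back out of the trace (multipath_status.sh@64; the function body is reconstructed, the RPC and jq filter are verbatim from the log):

  # port_status <trsvcid> <attribute> <expected>
  # attribute is one of: current | connected | accessible
  port_status() {
      local port=$1 attr=$2 expected=$3
      local status
      status=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
      [[ $status == "$expected" ]]
  }

  # e.g. the round at @114 (ANA states inaccessible/optimized) expands to:
  port_status 4420 current false && port_status 4421 current true &&
  port_status 4420 connected true && port_status 4421 connected true &&
  port_status 4420 accessible false && port_status 4421 accessible true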
00:19:07.476 02:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:07.476 02:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:07.477 02:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.477 02:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:07.736 02:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:07.736 02:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:07.736 02:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.736 02:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:07.995 02:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:07.995 02:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:07.995 02:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:07.995 02:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:08.254 02:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:08.254 02:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:19:08.513 02:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:19:08.513 02:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:19:08.785 02:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:09.045 02:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:19:09.982 02:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:19:09.982 02:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:09.983 02:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
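[Annotation] Between check rounds the trace flips the ANA state of one or both listeners and sleeps one second so the host can re-read the ANA log page before the next probe. A condensed reconstruction of set_ANA_state (multipath_status.sh@59-@60) plus the state sequence this run walks through under the default active_passive policy, with the expected "current" path noted from the corresponding check_status rounds:

  set_ANA_state() {  # $1 -> listener on port 4420, $2 -> listener on port 4421
      ./scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.3 -s 4420 -n "$1"
      ./scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.3 -s 4421 -n "$2"
  }

  # active_passive sweep (@90..@112): exactly one path is "current" at a time
  set_ANA_state optimized     optimized      # 4420 stays current
  set_ANA_state non_optimized optimized      # current flips to 4421
  set_ANA_state non_optimized non_optimized  # back to 4420
  set_ANA_state non_optimized inaccessible   # 4421 no longer accessible
  set_ANA_state inaccessible  inaccessible   # no path accessible at all
  set_ANA_state inaccessible  optimized      # only 4421 usable, so it is current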
00:19:09.983 02:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:10.242 02:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:10.242 02:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:10.242 02:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.242 02:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:10.500 02:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:10.500 02:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:10.500 02:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.500 02:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:10.759 02:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:10.759 02:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:10.760 02:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.760 02:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:11.019 02:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:11.019 02:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:11.019 02:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:11.019 02:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:11.278 02:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:11.278 02:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:11.278 02:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:11.278 02:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:11.537 02:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:11.537 
00:19:11.537 02:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:19:11.538 02:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:19:11.797 02:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized
00:19:12.057 02:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:19:12.995 02:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:19:12.995 02:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:19:12.995 02:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:12.995 02:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:19:13.255 02:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:19:13.255 02:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:19:13.255 02:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:13.255 02:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:19:13.513 02:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:13.513 02:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:19:13.513 02:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:19:13.513 02:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:13.772 02:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:13.772 02:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:19:13.772 02:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:13.772 02:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:19:14.341 02:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:14.341 02:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:19:14.341 02:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:14.341 02:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:19:14.341 02:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:14.341 02:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:19:14.341 02:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:14.341 02:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:19:14.600 02:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:14.600 02:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:19:14.600 02:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:19:14.859 02:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized
00:19:15.118 02:00:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:19:16.055 02:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:19:16.055 02:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:19:16.055 02:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:16.055 02:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:19:16.314 02:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:16.314 02:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:19:16.314 02:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:16.314 02:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:19:16.573 02:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:16.573 02:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:19:16.831 02:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:16.831 02:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:19:17.091 02:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:17.091 02:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:19:17.091 02:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:17.091 02:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:19:17.091 02:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:17.091 02:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:19:17.091 02:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:19:17.091 02:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:17.659 02:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:17.659 02:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:19:17.659 02:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:19:17.659 02:00:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:17.659 02:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:17.659 02:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:19:17.659 02:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:19:17.918 02:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible
00:19:18.177 02:00:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:19:19.552 02:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:19:19.552 02:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:19:19.552 02:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:19.552 02:00:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:19:19.552 02:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:19.552 02:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:19:19.552 02:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:19:19.552 02:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:19.811 02:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:19:19.811 02:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:19:19.811 02:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:19.811 02:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:19:20.071 02:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:20.071 02:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:19:20.071 02:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:20.071 02:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:19:20.330 02:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:20.330 02:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:19:20.330 02:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:20.330 02:00:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:19:20.589 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:19:20.589 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:19:20.589 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:19:20.589 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:19:20.849 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:19:20.849 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 91055
00:19:20.849 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 91055 ']'
00:19:20.849 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 91055
00:19:20.849 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:19:20.849 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:20.849 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91055
00:19:20.849 killing process with pid 91055
02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:19:20.849 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:19:20.849 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91055'
00:19:20.849 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 91055
00:19:20.849 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 91055
00:19:20.849 {
00:19:20.849 "results": [
00:19:20.849 {
00:19:20.849 "job": "Nvme0n1",
00:19:20.849 "core_mask": "0x4",
00:19:20.849 "workload": "verify",
00:19:20.849 "status": "terminated",
00:19:20.849 "verify_range": {
00:19:20.849 "start": 0,
00:19:20.849 "length": 16384
00:19:20.849 },
00:19:20.849 "queue_depth": 128,
00:19:20.849 "io_size": 4096,
00:19:20.849 "runtime": 33.17767,
00:19:20.849 "iops": 9618.668218714574,
00:19:20.849 "mibps": 37.572922729353806,
00:19:20.849 "io_failed": 0,
00:19:20.849 "io_timeout": 0,
00:19:20.849 "avg_latency_us": 13279.930770336503,
00:19:20.849 "min_latency_us": 160.11636363636364,
00:19:20.849 "max_latency_us": 4026531.84
00:19:20.849 }
00:19:20.849 ],
00:19:20.849 "core_count": 1
00:19:20.849 }
00:19:20.849 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 91055
00:19:20.849 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:19:21.112 [2024-11-19 01:59:56.925199] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization...
00:19:21.112 [2024-11-19 01:59:56.925295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91055 ]
00:19:21.112 [2024-11-19 01:59:57.063395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:21.112 [2024-11-19 01:59:57.082925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:19:21.112 [2024-11-19 01:59:57.111351] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:19:21.112 Running I/O for 90 seconds...
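The JSON block above is bdevperf's final summary for the Nvme0n1 verify job, emitted as killprocess stops it; everything after the cat of try.txt is the application's own output replayed from the start of the run (EAL init, per-second IOPS samples, and nvme_qpair notices). The throughput fields in the summary are internally consistent and can be cross-checked with jq; a hypothetical check, assuming the summary had been saved to a file named results.json (file name made up for illustration):

    # "mibps" should equal iops * io_size expressed in MiB/s:
    jq '.results[0].iops * .results[0].io_size / (1024 * 1024)' results.json
    # -> 37.5729..., matching the reported "mibps" of 37.572922729353806

In other words, the run averaged 9618.67 IOPS of 4 KiB verify I/O at queue depth 128 over 33.18 seconds, with no failed or timed-out I/O.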
00:19:21.112 7972.00 IOPS, 31.14 MiB/s [2024-11-19T02:00:31.727Z] 8010.00 IOPS, 31.29 MiB/s [2024-11-19T02:00:31.727Z] 8379.33 IOPS, 32.73 MiB/s [2024-11-19T02:00:31.727Z] 8762.00 IOPS, 34.23 MiB/s [2024-11-19T02:00:31.727Z] 8947.20 IOPS, 34.95 MiB/s [2024-11-19T02:00:31.727Z] 9147.00 IOPS, 35.73 MiB/s [2024-11-19T02:00:31.727Z] 9335.57 IOPS, 36.47 MiB/s [2024-11-19T02:00:31.727Z] 9469.00 IOPS, 36.99 MiB/s [2024-11-19T02:00:31.727Z] 9575.67 IOPS, 37.40 MiB/s [2024-11-19T02:00:31.727Z] 9693.30 IOPS, 37.86 MiB/s [2024-11-19T02:00:31.727Z] 9770.64 IOPS, 38.17 MiB/s [2024-11-19T02:00:31.727Z] 9827.08 IOPS, 38.39 MiB/s [2024-11-19T02:00:31.727Z] 9910.54 IOPS, 38.71 MiB/s [2024-11-19T02:00:31.727Z] 9959.21 IOPS, 38.90 MiB/s [2024-11-19T02:00:31.727Z] [2024-11-19 02:00:12.867514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:126248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.112 [2024-11-19 02:00:12.867576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:21.112 [2024-11-19 02:00:12.867644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:126256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.112 [2024-11-19 02:00:12.867664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:21.112 [2024-11-19 02:00:12.867684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:126264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.112 [2024-11-19 02:00:12.867698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:21.112 [2024-11-19 02:00:12.867717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:126272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.112 [2024-11-19 02:00:12.867731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:21.112 [2024-11-19 02:00:12.867749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:126280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.112 [2024-11-19 02:00:12.867762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:21.112 [2024-11-19 02:00:12.867780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:126288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.112 [2024-11-19 02:00:12.867793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:21.112 [2024-11-19 02:00:12.867812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:126296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.112 [2024-11-19 02:00:12.867825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:21.112 [2024-11-19 02:00:12.867844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:126304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.112 [2024-11-19 02:00:12.867857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:21.112 [2024-11-19 02:00:12.867875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.112 [2024-11-19 02:00:12.867888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:21.112 [2024-11-19 02:00:12.867935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.112 [2024-11-19 02:00:12.867951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:21.112 [2024-11-19 02:00:12.867969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.112 [2024-11-19 02:00:12.867982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:21.112 [2024-11-19 02:00:12.868001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:125952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.112 [2024-11-19 02:00:12.868014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:21.112 [2024-11-19 02:00:12.868033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:125960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.112 [2024-11-19 02:00:12.868046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:21.112 [2024-11-19 02:00:12.868064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:125968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.112 [2024-11-19 02:00:12.868078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:21.112 [2024-11-19 02:00:12.868097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:125976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.112 [2024-11-19 02:00:12.868110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:21.112 [2024-11-19 02:00:12.868128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:125984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.112 [2024-11-19 02:00:12.868141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:21.112 [2024-11-19 02:00:12.868327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:126312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.112 [2024-11-19 02:00:12.868350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:21.112 [2024-11-19 02:00:12.868373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:126320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.113 [2024-11-19 
02:00:12.868387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.868408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:126328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.113 [2024-11-19 02:00:12.868421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.868441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:126336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.113 [2024-11-19 02:00:12.868455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.868474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:126344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.113 [2024-11-19 02:00:12.868487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.868536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:126352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.113 [2024-11-19 02:00:12.868566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.868589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:126360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.113 [2024-11-19 02:00:12.868604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.868624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:126368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.113 [2024-11-19 02:00:12.868639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.868658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:126376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.113 [2024-11-19 02:00:12.868673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.868692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:126384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.113 [2024-11-19 02:00:12.868706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.868726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:126392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.113 [2024-11-19 02:00:12.868740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.868760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:126400 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.113 [2024-11-19 02:00:12.868774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.868794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:126408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.113 [2024-11-19 02:00:12.868808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.868844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:126416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.113 [2024-11-19 02:00:12.868859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.868879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:126424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.113 [2024-11-19 02:00:12.868894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.868929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:126432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.113 [2024-11-19 02:00:12.868958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.868978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:126440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.113 [2024-11-19 02:00:12.868991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.869011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:126448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.113 [2024-11-19 02:00:12.869049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.869070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.113 [2024-11-19 02:00:12.869085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.869104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:126464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.113 [2024-11-19 02:00:12.869118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.869138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:126472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.113 [2024-11-19 02:00:12.869167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.869186] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:126480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.113 [2024-11-19 02:00:12.869199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.869219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:126488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.113 [2024-11-19 02:00:12.869233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.869252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:126496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.113 [2024-11-19 02:00:12.869265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.869285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:125992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.113 [2024-11-19 02:00:12.869298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.869318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.113 [2024-11-19 02:00:12.869332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.869366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.113 [2024-11-19 02:00:12.869380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.869398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:126016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.113 [2024-11-19 02:00:12.869411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.869430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.113 [2024-11-19 02:00:12.869443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.869462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:126032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.113 [2024-11-19 02:00:12.869482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.869502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.113 [2024-11-19 02:00:12.869516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:21.113 
[2024-11-19 02:00:12.869535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:126048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.113 [2024-11-19 02:00:12.869548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.869592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:126504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.113 [2024-11-19 02:00:12.869612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.869633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:126512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.113 [2024-11-19 02:00:12.869646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.869681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:126520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.113 [2024-11-19 02:00:12.869695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.869714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:126528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.113 [2024-11-19 02:00:12.869727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.869747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.113 [2024-11-19 02:00:12.869761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.869781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:126544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.113 [2024-11-19 02:00:12.869796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.869815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:126552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.113 [2024-11-19 02:00:12.869829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:21.113 [2024-11-19 02:00:12.869848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:126560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.113 [2024-11-19 02:00:12.869861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.869881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:126568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.114 [2024-11-19 02:00:12.869894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.869940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:126576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.114 [2024-11-19 02:00:12.869969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.869994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:126584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.114 [2024-11-19 02:00:12.870010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.870034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:126592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.114 [2024-11-19 02:00:12.870050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.870073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:126600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.114 [2024-11-19 02:00:12.870089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.870111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:126608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.114 [2024-11-19 02:00:12.870128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.870150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:126616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.114 [2024-11-19 02:00:12.870166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.870189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.114 [2024-11-19 02:00:12.870205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.870227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:126056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.114 [2024-11-19 02:00:12.870272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.870306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:126064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.114 [2024-11-19 02:00:12.870320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.870354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.114 [2024-11-19 02:00:12.870368] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.870386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.114 [2024-11-19 02:00:12.870399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.870418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.114 [2024-11-19 02:00:12.870431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.870450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:126096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.114 [2024-11-19 02:00:12.870463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.870489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:126104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.114 [2024-11-19 02:00:12.870504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.870522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:126112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.114 [2024-11-19 02:00:12.870536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.870555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.114 [2024-11-19 02:00:12.870568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.870599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.114 [2024-11-19 02:00:12.870613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.870648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.114 [2024-11-19 02:00:12.870662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.870682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:126144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.114 [2024-11-19 02:00:12.870696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.870715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:126152 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:21.114 [2024-11-19 02:00:12.870729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.870749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:126160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.114 [2024-11-19 02:00:12.870763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.870783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:126168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.114 [2024-11-19 02:00:12.870797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.870816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.114 [2024-11-19 02:00:12.870830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.870849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:126632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.114 [2024-11-19 02:00:12.870863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.870882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:126640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.114 [2024-11-19 02:00:12.870896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.870923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:126648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.114 [2024-11-19 02:00:12.870938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.870957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:126656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.114 [2024-11-19 02:00:12.870971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.870990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:126664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.114 [2024-11-19 02:00:12.871005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.871024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:126672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.114 [2024-11-19 02:00:12.871038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.871072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:71 nsid:1 lba:126680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.114 [2024-11-19 02:00:12.871085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.871104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:126688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.114 [2024-11-19 02:00:12.871117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.871136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:126696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.114 [2024-11-19 02:00:12.871149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.871168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:126704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.114 [2024-11-19 02:00:12.871181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.871200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:126712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.114 [2024-11-19 02:00:12.871214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.871232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:126720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.114 [2024-11-19 02:00:12.871250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.871270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:126728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.114 [2024-11-19 02:00:12.871283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.871303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:126736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.114 [2024-11-19 02:00:12.871316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:21.114 [2024-11-19 02:00:12.871335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:126744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.115 [2024-11-19 02:00:12.871356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:21.115 [2024-11-19 02:00:12.871376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:126752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.115 [2024-11-19 02:00:12.871389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.115 [2024-11-19 02:00:12.871408] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:126184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.115 [2024-11-19 02:00:12.871421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:21.115 [2024-11-19 02:00:12.871441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.115 [2024-11-19 02:00:12.871454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:21.115 [2024-11-19 02:00:12.871473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:126200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.115 [2024-11-19 02:00:12.871486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:21.115 [2024-11-19 02:00:12.871504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.115 [2024-11-19 02:00:12.871518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:21.115 [2024-11-19 02:00:12.871562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:126216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.115 [2024-11-19 02:00:12.871578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:21.115 [2024-11-19 02:00:12.871614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:126224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.115 [2024-11-19 02:00:12.871628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:21.115 [2024-11-19 02:00:12.871648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:126232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.115 [2024-11-19 02:00:12.871679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:21.115 [2024-11-19 02:00:12.872332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:126240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.115 [2024-11-19 02:00:12.872359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:21.115 [2024-11-19 02:00:12.872389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:126760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.115 [2024-11-19 02:00:12.872405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:21.115 [2024-11-19 02:00:12.872431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:126768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.115 [2024-11-19 02:00:12.872445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 
cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:21.115
[2024-11-19 02:00:12.872470-873309] nvme_qpair.c: *NOTICE*: 18 further WRITE commands on sqid:1 (nsid:1, lba 126776 through 126912 in steps of 8, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:002d-003e p:0 m:0 dnr:0; the per-command NOTICE/completion pairs are identical apart from cid and sqhd. 00:19:21.115
Periodic throughput samples [2024-11-19T02:00:31.730Z]: 9787.53, 9175.81, 8636.06, 8156.28, 7898.58, 8017.45, 8126.52, 8395.00, 8644.57, 8853.46, 8933.92, 8985.38, 9027.70, 9153.71, 9321.52, 9469.37 IOPS (30.85-38.23 MiB/s)
[2024-11-19 02:00:28.716938-720621] nvme_qpair.c: *NOTICE*: a mixed burst on sqid:1, WRITE commands for lba 103592 through 104176 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) interleaved with READ commands for lba 103104 through 103744 (len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0); every command completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:002e-0076 p:0 m:0 dnr:0, the pairs again identical apart from cid, lba and sqhd. 00:19:21.115-117
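The (03/02) pair that spdk_nvme_print_completion keeps printing is the NVMe status code type / status code of each failed completion. A small decode helper, shown here purely as an illustrative sketch (it is not a function from the SPDK tree), based on the status tables in the NVMe base specification:

decode_nvme_status() {
  # Hypothetical helper for reading these traces; sct = status code type, sc = status code.
  local sct=$1 sc=$2
  case "$sct/$sc" in
    00/00) echo "Generic: Successful Completion" ;;
    03/01) echo "Path Related: Asymmetric Access Persistent Loss" ;;
    03/02) echo "Path Related: Asymmetric Access Inaccessible (the path's ANA state forbids I/O)" ;;
    03/03) echo "Path Related: Asymmetric Access Transition" ;;
    *)     echo "SCT=$sct SC=$sc: see the NVMe base spec status code tables" ;;
  esac
}
decode_nvme_status 03 02   # matches every completion above: I/O sent down an ANA-inaccessible path

That reading is consistent with a multipath status test that deliberately makes one path's ANA group inaccessible and watches queued I/O get retried on another path.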
Final throughput samples [2024-11-19T02:00:31.732Z]: 9562.52 IOPS, 37.35 MiB/s; 9594.19 IOPS, 37.48 MiB/s; 9617.36 IOPS, 37.57 MiB/s
Received shutdown signal, test time was about 33.178365 seconds 00:19:21.117
Latency(us)
Device Information        : runtime(s)     IOPS   MiB/s  Fail/s  TO/s    Average      min          max
Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
Verification LBA range: start 0x0 length 0x4000
Nvme0n1                   :      33.18  9618.67   37.57    0.00  0.00   13279.93   160.12   4026531.84
======================================================================================================
Total                     :             9618.67   37.57    0.00  0.00   13279.93   160.12   4026531.84
02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:21.377
02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
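The MiB/s column is simply IOPS multiplied by the 4096-byte I/O size; a quick arithmetic check of the summary row (values taken from the table above):

awk 'BEGIN { iops = 9618.67; io_size = 4096
             printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'
# prints 37.57 MiB/s, matching the MiB/s column of the 4 KiB verify workload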
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:21.377 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:19:21.377 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:21.377 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:21.377 rmmod nvme_tcp 00:19:21.377 rmmod nvme_fabrics 00:19:21.377 rmmod nvme_keyring 00:19:21.377 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:21.377 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:19:21.377 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:19:21.377 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 90999 ']' 00:19:21.377 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 90999 00:19:21.377 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 90999 ']' 00:19:21.377 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 90999 00:19:21.377 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:19:21.377 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.377 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90999 00:19:21.377 killing process with pid 90999 00:19:21.377 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:21.377 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:21.377 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90999' 00:19:21.377 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 90999 00:19:21.377 02:00:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 90999 00:19:21.636 02:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:21.636 02:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:21.636 02:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:21.636 02:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:19:21.636 02:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:19:21.636 02:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:21.636 02:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:19:21.636 02:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:21.636 02:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:21.636 02:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:21.636 
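The nvmfcleanup and killprocess helpers traced above follow a fixed pattern: unload the initiator modules inside a tolerant retry loop, confirm the recorded target pid is not a sudo wrapper before signalling it, and strip only the firewall rules that were tagged with an SPDK_NVMF comment when installed. A condensed sketch of that pattern (simplified, not the exact library code):

sync
for i in {1..20}; do
  modprobe -v -r nvme-tcp && break    # rmmod nvme_tcp plus now-unused deps (nvme_fabrics, nvme_keyring)
done
modprobe -v -r nvme-fabrics
pid=90999                             # nvmfpid recorded when the target was started
if [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
  echo "killing process with pid $pid"
  kill "$pid" && wait "$pid"
fi
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged rules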
02:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:21.636 02:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:21.636 02:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:21.636 02:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:21.636 02:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:21.636 02:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:21.636 02:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:21.636 02:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:21.636 02:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:21.636 02:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:21.636 02:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:21.636 02:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:21.636 02:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:21.636 02:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.636 02:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:21.636 02:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:19:21.896 00:19:21.896 real 0m38.439s 00:19:21.896 user 2m3.767s 00:19:21.896 sys 0m11.346s 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:21.896 ************************************ 00:19:21.896 END TEST nvmf_host_multipath_status 00:19:21.896 ************************************ 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.896 ************************************ 00:19:21.896 START TEST nvmf_discovery_remove_ifc 00:19:21.896 ************************************ 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:21.896 * Looking for test storage... 
00:19:21.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:21.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.896 --rc genhtml_branch_coverage=1 00:19:21.896 --rc genhtml_function_coverage=1 00:19:21.896 --rc genhtml_legend=1 00:19:21.896 --rc geninfo_all_blocks=1 00:19:21.896 --rc geninfo_unexecuted_blocks=1 00:19:21.896 00:19:21.896 ' 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:21.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.896 --rc genhtml_branch_coverage=1 00:19:21.896 --rc genhtml_function_coverage=1 00:19:21.896 --rc genhtml_legend=1 00:19:21.896 --rc geninfo_all_blocks=1 00:19:21.896 --rc geninfo_unexecuted_blocks=1 00:19:21.896 00:19:21.896 ' 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:21.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.896 --rc genhtml_branch_coverage=1 00:19:21.896 --rc genhtml_function_coverage=1 00:19:21.896 --rc genhtml_legend=1 00:19:21.896 --rc geninfo_all_blocks=1 00:19:21.896 --rc geninfo_unexecuted_blocks=1 00:19:21.896 00:19:21.896 ' 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:21.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.896 --rc genhtml_branch_coverage=1 00:19:21.896 --rc genhtml_function_coverage=1 00:19:21.896 --rc genhtml_legend=1 00:19:21.896 --rc geninfo_all_blocks=1 00:19:21.896 --rc geninfo_unexecuted_blocks=1 00:19:21.896 00:19:21.896 ' 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:21.896 02:00:32 
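The xtrace above walks scripts/common.sh's version comparison: lt 1.15 2 splits both version strings on '.', '-' and ':' and compares them field by field, so the captured lcov 1.15 sorts below 2 and the legacy --rc option spelling is kept. A minimal re-sketch of that logic (simplified, not the exact library code):

lt() {  # usage: lt A B  ->  exit 0 when version A sorts before version B
  local IFS=.-: i v1 v2
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$2"
  for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
  done
  return 1   # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_branch_coverage=1 spelling"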
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:19:21.896 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:22.156 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:22.156 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:22.156 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:22.156 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:22.156 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:22.156 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:22.156 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:22.156 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:22.156 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:22.156 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:19:22.156 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:19:22.156 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:22.156 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:22.156 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:22.156 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:22.156 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:22.156 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:19:22.156 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:22.156 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:22.156 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:22.156 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.156 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.156 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.156 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:19:22.156 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.156 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:22.157 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:22.157 02:00:32 
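Three NQN flavours are set up here: the discovery NQN, which is fixed by the NVMe spec, reverse-domain subsystem NQNs such as nqn.2016-06.io.spdk:cnode, and the UUID-based host NQN generated earlier by nvme gen-hostnqn. The host NQN shape can be reproduced by hand; a sketch (using uuidgen from util-linux is this sketch's assumption, not what common.sh does):

echo nqn.2014-08.org.nvmexpress.discovery           # well-known discovery service NQN
echo "nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"   # same shape as nvme gen-hostnqn output
# e.g. nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89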
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:22.157 Cannot find device "nvmf_init_br" 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:22.157 Cannot find device "nvmf_init_br2" 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:22.157 Cannot find device "nvmf_tgt_br" 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:22.157 Cannot find device "nvmf_tgt_br2" 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:22.157 Cannot find device "nvmf_init_br" 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:22.157 Cannot find device "nvmf_init_br2" 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:22.157 Cannot find device "nvmf_tgt_br" 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:22.157 Cannot find device "nvmf_tgt_br2" 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:22.157 Cannot find device "nvmf_br" 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:22.157 Cannot find device "nvmf_init_if" 00:19:22.157 02:00:32 
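The "Cannot find device" lines above are expected rather than failures: nvmf_veth_init runs the teardown path first so a fresh topology never collides with leftovers from an earlier test, and every removal is allowed to fail (note the true executed on the same script line after each miss). The idiom, condensed:

# Pre-setup cleanup is deliberately tolerant; on a clean host every step misses.
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" nomaster || true
  ip link set "$dev" down || true
done
ip link delete nvmf_br type bridge || true
ip link delete nvmf_init_if || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true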
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:22.157 Cannot find device "nvmf_init_if2" 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:22.157 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:22.157 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:22.157 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:22.417 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:22.417 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:22.417 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:22.417 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:22.417 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:22.417 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:22.417 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:22.417 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:22.417 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:22.417 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:22.417 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:22.417 02:00:32 
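What nvmf_veth_init is building across these steps is a small bridged topology: the initiator veth pairs stay on the host, the target veth pairs are moved into the nvmf_tgt_ns_spdk namespace, and all the bridge-side ends are enslaved to one nvmf_br bridge, with 10.0.0.1-.4/24 assigned in the commands that follow. A condensed sketch showing one interface of each kind (the if2 pairs are set up the same way):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                     # bridge the two ends together
ip link set nvmf_tgt_br master nvmf_br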
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:22.417 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:22.417 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:22.418 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:22.418 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:19:22.418 00:19:22.418 --- 10.0.0.3 ping statistics --- 00:19:22.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.418 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:22.418 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:22.418 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:19:22.418 00:19:22.418 --- 10.0.0.4 ping statistics --- 00:19:22.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.418 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:22.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:22.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:19:22.418 00:19:22.418 --- 10.0.0.1 ping statistics --- 00:19:22.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.418 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:22.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:22.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:19:22.418 00:19:22.418 --- 10.0.0.2 ping statistics --- 00:19:22.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.418 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=91871 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 91871 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 91871 ']' 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
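With the interfaces up, the four *_br peers are enslaved to a single bridge so every endpoint shares one L2 segment, and the `ipts` wrapper (expanded at common.sh@790 above) tags each iptables rule with an 'SPDK_NVMF:' comment so cleanup can later strip exactly these rules. The pings then prove both directions work before any NVMe/TCP traffic is attempted. A condensed sketch of this phase, same names as the trace:

    # One bridge ties the root-namespace ends of all four veth pairs together.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br
    done

    # Open the NVMe/TCP port on the initiator interfaces and allow bridged
    # forwarding; the comment makes the rules greppable for teardown.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

    # Reachability in both directions before starting the target.
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The target itself is then launched inside the namespace (`ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2`), so it only ever sees the nvmf_tgt_if/nvmf_tgt_if2 side of the fixture.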
00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.418 02:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:22.418 [2024-11-19 02:00:33.027236] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:19:22.418 [2024-11-19 02:00:33.027318] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.677 [2024-11-19 02:00:33.159280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.677 [2024-11-19 02:00:33.177409] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:22.677 [2024-11-19 02:00:33.177471] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:22.677 [2024-11-19 02:00:33.177480] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:22.677 [2024-11-19 02:00:33.177487] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:22.677 [2024-11-19 02:00:33.177493] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:22.677 [2024-11-19 02:00:33.177798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.677 [2024-11-19 02:00:33.205861] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:23.615 02:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.615 02:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:19:23.615 02:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:23.615 02:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:23.615 02:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:23.615 02:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:23.615 02:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:19:23.615 02:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.615 02:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:23.615 [2024-11-19 02:00:34.029247] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:23.615 [2024-11-19 02:00:34.037347] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:23.615 null0 00:19:23.615 [2024-11-19 02:00:34.069287] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:23.615 02:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.615 02:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=91903 00:19:23.615 02:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:19:23.615 02:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 91903 /tmp/host.sock 00:19:23.615 02:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 91903 ']' 00:19:23.615 02:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:19:23.615 02:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.615 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:23.615 02:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:23.615 02:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.615 02:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:23.615 [2024-11-19 02:00:34.149330] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:19:23.615 [2024-11-19 02:00:34.149426] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91903 ] 00:19:23.874 [2024-11-19 02:00:34.301622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.874 [2024-11-19 02:00:34.324891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.874 02:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.874 02:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:19:23.874 02:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:23.874 02:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:19:23.874 02:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.874 02:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:23.874 02:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.874 02:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:19:23.874 02:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.874 02:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:23.874 [2024-11-19 02:00:34.439425] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:23.874 02:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.874 02:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:19:23.874 02:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.874 02:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:25.252 [2024-11-19 02:00:35.481955] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:25.252 [2024-11-19 02:00:35.482000] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:25.252 [2024-11-19 02:00:35.482018] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:25.252 [2024-11-19 02:00:35.487995] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:19:25.252 [2024-11-19 02:00:35.542341] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:19:25.252 [2024-11-19 02:00:35.543201] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xd813a0:1 started. 00:19:25.252 [2024-11-19 02:00:35.544748] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:25.252 [2024-11-19 02:00:35.544814] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:25.252 [2024-11-19 02:00:35.544839] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:25.252 [2024-11-19 02:00:35.544854] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:25.252 [2024-11-19 02:00:35.544873] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:25.252 02:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.252 02:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:19:25.252 02:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:25.252 02:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:25.252 02:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:25.252 02:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.252 02:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:25.252 02:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:25.252 [2024-11-19 02:00:35.550831] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xd813a0 was disconnected and freed. delete nvme_qpair. 
00:19:25.252 02:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:25.252 02:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.252 02:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:19:25.252 02:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:19:25.252 02:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:19:25.252 02:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:19:25.252 02:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:25.252 02:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:25.252 02:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.252 02:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:25.252 02:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:25.252 02:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:25.252 02:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:25.252 02:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.252 02:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:25.252 02:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:26.199 02:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:26.199 02:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:26.199 02:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.199 02:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:26.199 02:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:26.199 02:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:26.199 02:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:26.199 02:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.199 02:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:26.199 02:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:27.216 02:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:27.216 02:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:27.216 02:00:37 
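The `get_bdev_list`/`wait_for_bdev` pair doing the work here is small: `bdev_get_bdevs` over the host's RPC socket, flattened to a sorted one-line list with jq/sort/xargs, polled once per second (the repeating `sleep 1` iterations that follow) until it matches the expectation, with '' meaning "no bdevs left". A stand-alone sketch, assuming SPDK's stock scripts/rpc.py in place of the harness's rpc_cmd wrapper and omitting the timeout a real helper would also want:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    get_bdev_list() {
        # e.g. prints "nvme0n1" while attached, "" once torn down
        "$rpc" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected=$1
        until [[ "$(get_bdev_list)" == "$expected" ]]; do
            sleep 1
        done
    }

    # After deleting 10.0.0.3/24 and downing nvmf_tgt_if inside the netns:
    wait_for_bdev ''    # block until nvme0n1 disappears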
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.216 02:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:27.216 02:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:27.216 02:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:27.216 02:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:27.216 02:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.216 02:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:27.216 02:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:28.595 02:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:28.595 02:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:28.595 02:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:28.595 02:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:28.595 02:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.595 02:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:28.595 02:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:28.595 02:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.595 02:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:28.595 02:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:29.531 02:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:29.531 02:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:29.531 02:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:29.531 02:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:29.531 02:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.531 02:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:29.531 02:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:29.531 02:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.531 02:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:29.531 02:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:30.469 02:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:30.469 02:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:30.469 02:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.469 02:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:30.469 02:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:30.469 02:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:30.469 02:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:30.469 02:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.469 [2024-11-19 02:00:40.972750] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:19:30.469 [2024-11-19 02:00:40.972805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.469 [2024-11-19 02:00:40.972818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.469 [2024-11-19 02:00:40.972829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.469 [2024-11-19 02:00:40.972837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.469 [2024-11-19 02:00:40.972845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.469 [2024-11-19 02:00:40.972853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.469 [2024-11-19 02:00:40.972861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.469 [2024-11-19 02:00:40.972869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.469 [2024-11-19 02:00:40.972878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.469 [2024-11-19 02:00:40.972885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.469 [2024-11-19 02:00:40.972893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5c8c0 is same with the state(6) to be set 00:19:30.469 02:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:30.469 02:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:30.469 [2024-11-19 02:00:40.982765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd5c8c0 (9): Bad file descriptor 00:19:30.469 [2024-11-19 02:00:40.992764] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
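This error burst is the point of the test: with nvmf_tgt_if gone, the host's TCP read times out (errno 110, ETIMEDOUT), the qpair is torn down, and the outstanding admin commands (the ASYNC EVENT REQUESTs and the KEEP ALIVE above) complete as ABORTED - SQ DELETION. How long the host keeps retrying before the bdev is deleted comes from the options passed to bdev_nvme_start_discovery earlier in the run, repeated here with the cadence spelled out (using the $rpc path from the sketch above):

    # --reconnect-delay-sec 1      retry the connection once per second
    # --ctrlr-loss-timeout-sec 2   declare the controller lost after 2 s of failures
    # --fast-io-fail-timeout-sec 1 fail queued I/O after 1 s
    "$rpc" -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

The reconnect attempts below (uring connect() failing with the same errno 110) keep this one-second cadence until the 2 s loss timeout expires, at which point the controller is left in failed state, pending resets are cleared, the discovery entry is removed, and nvme0n1 is deleted.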
00:19:30.469 [2024-11-19 02:00:40.992797] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:19:30.469 [2024-11-19 02:00:40.992807] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:19:30.469 [2024-11-19 02:00:40.992813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:30.469 [2024-11-19 02:00:40.992846] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:19:31.405 02:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:31.405 02:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:31.405 02:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:31.405 02:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.405 02:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:31.405 02:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:31.405 02:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:31.665 [2024-11-19 02:00:42.048590] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:19:31.665 [2024-11-19 02:00:42.048680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd5c8c0 with addr=10.0.0.3, port=4420 00:19:31.665 [2024-11-19 02:00:42.048703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5c8c0 is same with the state(6) to be set 00:19:31.665 [2024-11-19 02:00:42.048746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd5c8c0 (9): Bad file descriptor 00:19:31.665 [2024-11-19 02:00:42.049471] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:19:31.665 [2024-11-19 02:00:42.049581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:31.665 [2024-11-19 02:00:42.049602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:31.665 [2024-11-19 02:00:42.049621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:19:31.665 [2024-11-19 02:00:42.049648] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:19:31.665 [2024-11-19 02:00:42.049660] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:19:31.665 [2024-11-19 02:00:42.049669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:19:31.665 [2024-11-19 02:00:42.049688] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:19:31.665 [2024-11-19 02:00:42.049698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:31.665 02:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.665 02:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:31.665 02:00:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:32.603 [2024-11-19 02:00:43.049746] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:19:32.603 [2024-11-19 02:00:43.049792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:19:32.603 [2024-11-19 02:00:43.049814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:32.603 [2024-11-19 02:00:43.049824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:32.603 [2024-11-19 02:00:43.049833] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:19:32.603 [2024-11-19 02:00:43.049840] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:19:32.603 [2024-11-19 02:00:43.049846] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:19:32.603 [2024-11-19 02:00:43.049850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:19:32.603 [2024-11-19 02:00:43.049879] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:19:32.603 [2024-11-19 02:00:43.049923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.603 [2024-11-19 02:00:43.049953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.603 [2024-11-19 02:00:43.049965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.603 [2024-11-19 02:00:43.049973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.603 [2024-11-19 02:00:43.049981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.603 [2024-11-19 02:00:43.049989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.603 [2024-11-19 02:00:43.049998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.603 [2024-11-19 02:00:43.050005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.603 [2024-11-19 02:00:43.050014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:32.603 [2024-11-19 02:00:43.050022] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:32.603 [2024-11-19 02:00:43.050030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:19:32.603 [2024-11-19 02:00:43.050079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd4ae40 (9): Bad file descriptor 00:19:32.603 [2024-11-19 02:00:43.051059] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:19:32.603 [2024-11-19 02:00:43.051100] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:19:32.603 02:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:32.603 02:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:32.603 02:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.603 02:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:32.603 02:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:32.603 02:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:32.603 02:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:32.603 02:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.603 02:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:19:32.603 02:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:32.603 02:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:32.603 02:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:19:32.603 02:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:32.603 02:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:32.603 02:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:32.603 02:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:32.603 02:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.603 02:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:32.604 02:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:32.604 02:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.604 02:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:32.604 02:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:33.985 02:00:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:33.985 02:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:33.985 02:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:33.985 02:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.985 02:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:33.985 02:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:33.985 02:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:33.985 02:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.985 02:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:33.985 02:00:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:34.553 [2024-11-19 02:00:45.056687] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:34.553 [2024-11-19 02:00:45.056711] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:34.553 [2024-11-19 02:00:45.056743] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:34.553 [2024-11-19 02:00:45.062721] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:19:34.553 [2024-11-19 02:00:45.117003] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:19:34.553 [2024-11-19 02:00:45.117681] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xd55180:1 started. 00:19:34.553 [2024-11-19 02:00:45.119009] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:34.553 [2024-11-19 02:00:45.119066] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:34.553 [2024-11-19 02:00:45.119089] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:34.553 [2024-11-19 02:00:45.119104] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:19:34.553 [2024-11-19 02:00:45.119112] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:34.553 [2024-11-19 02:00:45.125337] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xd55180 was disconnected and freed. delete nvme_qpair. 
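Recovery needs nothing beyond undoing the fault (@82/@83 above re-add the address and raise the link): the discovery service still knows about 10.0.0.3:8009, reconnects, and attaches the same subsystem under the next free controller name, so the namespace now surfaces as nvme1n1 instead of nvme0n1. The restore step, condensed, with wait_for_bdev as sketched earlier:

    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    wait_for_bdev nvme1n1    # discovery re-attaches; the bdev reappears renamed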
00:19:34.813 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:34.813 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:34.813 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:34.813 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:34.813 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.813 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:34.813 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:34.813 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.813 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:19:34.813 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:19:34.813 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 91903 00:19:34.813 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 91903 ']' 00:19:34.813 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 91903 00:19:34.813 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:19:34.813 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.813 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91903 00:19:34.813 killing process with pid 91903 00:19:34.813 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:34.813 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:34.813 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91903' 00:19:34.813 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 91903 00:19:34.813 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 91903 00:19:35.072 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:19:35.072 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:35.072 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:19:35.072 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:35.072 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:19:35.072 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:35.072 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:35.072 rmmod nvme_tcp 00:19:35.072 rmmod nvme_fabrics 00:19:35.072 rmmod nvme_keyring 00:19:35.072 02:00:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:35.072 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:19:35.072 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:19:35.072 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 91871 ']' 00:19:35.072 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 91871 00:19:35.072 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 91871 ']' 00:19:35.072 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 91871 00:19:35.072 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:19:35.073 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:35.073 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91871 00:19:35.073 killing process with pid 91871 00:19:35.073 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:35.073 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:35.073 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91871' 00:19:35.073 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 91871 00:19:35.073 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 91871 00:19:35.332 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:35.332 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:35.332 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:35.332 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:19:35.332 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:19:35.332 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:35.332 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:19:35.332 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:35.332 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:35.332 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:35.332 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:35.332 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:35.332 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:35.332 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:35.332 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:35.332 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:35.332 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:35.332 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:35.332 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:35.332 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:35.332 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:35.332 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:35.332 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:35.332 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.332 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:35.332 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.591 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:19:35.591 00:19:35.591 real 0m13.664s 00:19:35.591 user 0m22.970s 00:19:35.591 sys 0m2.365s 00:19:35.591 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:35.591 02:00:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:35.591 ************************************ 00:19:35.591 END TEST nvmf_discovery_remove_ifc 00:19:35.591 ************************************ 00:19:35.591 02:00:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:35.591 02:00:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:35.591 02:00:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:35.591 02:00:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.591 ************************************ 00:19:35.591 START TEST nvmf_identify_kernel_target 00:19:35.591 ************************************ 00:19:35.592 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:35.592 * Looking for test storage... 
00:19:35.592 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:35.592 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:35.592 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:19:35.592 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:35.592 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:35.592 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:35.592 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:35.592 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:35.592 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:35.592 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:35.592 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:35.592 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:35.592 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:35.592 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:35.592 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:35.592 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:35.592 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:19:35.592 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:19:35.592 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:35.592 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:35.592 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:19:35.592 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:19:35.592 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:35.592 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:19:35.592 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:35.852 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:19:35.852 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:19:35.852 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:35.852 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:19:35.852 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:35.852 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:35.852 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:35.852 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:19:35.852 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:35.852 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:35.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.852 --rc genhtml_branch_coverage=1 00:19:35.852 --rc genhtml_function_coverage=1 00:19:35.852 --rc genhtml_legend=1 00:19:35.852 --rc geninfo_all_blocks=1 00:19:35.852 --rc geninfo_unexecuted_blocks=1 00:19:35.852 00:19:35.852 ' 00:19:35.852 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:35.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.852 --rc genhtml_branch_coverage=1 00:19:35.852 --rc genhtml_function_coverage=1 00:19:35.852 --rc genhtml_legend=1 00:19:35.852 --rc geninfo_all_blocks=1 00:19:35.852 --rc geninfo_unexecuted_blocks=1 00:19:35.852 00:19:35.852 ' 00:19:35.852 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:35.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.852 --rc genhtml_branch_coverage=1 00:19:35.852 --rc genhtml_function_coverage=1 00:19:35.852 --rc genhtml_legend=1 00:19:35.852 --rc geninfo_all_blocks=1 00:19:35.852 --rc geninfo_unexecuted_blocks=1 00:19:35.852 00:19:35.852 ' 00:19:35.852 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:35.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.852 --rc genhtml_branch_coverage=1 00:19:35.852 --rc genhtml_function_coverage=1 00:19:35.852 --rc genhtml_legend=1 00:19:35.852 --rc geninfo_all_blocks=1 00:19:35.853 --rc geninfo_unexecuted_blocks=1 00:19:35.853 00:19:35.853 ' 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
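The block above is scripts/common.sh deciding whether the installed lcov predates 2.x: `lt 1.15 2` splits both version strings on '.', '-' and ':' and compares them numerically field by field, and since 1 < 2 in the first field it returns true, so the lcov 1.x spelling of the --rc coverage flags is exported. A hypothetical condensation of that comparison (the real helper tracks lt/gt/eq counters and a case on the operator, as the trace shows):

    cmp_versions() {    # usage: cmp_versions 1.15 '<' 2
        local IFS=.-: v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $2 == *'<'* ]]; return; }
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $2 == *'>'* ]]; return; }
        done
        [[ $2 == *'='* ]]    # all fields equal: true only for ==, <= or >=
    }
    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo "lcov predates 2.x"    # prints: lcov predates 2.x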
00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:35.853 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:19:35.853 02:00:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:35.853 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:35.854 02:00:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:35.854 Cannot find device "nvmf_init_br" 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:35.854 Cannot find device "nvmf_init_br2" 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:35.854 Cannot find device "nvmf_tgt_br" 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:35.854 Cannot find device "nvmf_tgt_br2" 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:35.854 Cannot find device "nvmf_init_br" 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:35.854 Cannot find device "nvmf_init_br2" 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:35.854 Cannot find device "nvmf_tgt_br" 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:35.854 Cannot find device "nvmf_tgt_br2" 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:35.854 Cannot find device "nvmf_br" 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:35.854 Cannot find device "nvmf_init_if" 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:35.854 Cannot find device "nvmf_init_if2" 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:35.854 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:35.854 02:00:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:35.854 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:35.854 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:36.113 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:36.113 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:36.113 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:36.113 02:00:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:36.113 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:36.113 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:36.113 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:36.113 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:36.113 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:36.113 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:36.113 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:36.113 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:36.113 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:36.114 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:36.114 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:19:36.114 00:19:36.114 --- 10.0.0.3 ping statistics --- 00:19:36.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.114 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:36.114 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:36.114 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:19:36.114 00:19:36.114 --- 10.0.0.4 ping statistics --- 00:19:36.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.114 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:36.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:36.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:19:36.114 00:19:36.114 --- 10.0.0.1 ping statistics --- 00:19:36.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.114 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:36.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
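The nvmf_veth_init calls traced above build the test topology: a network namespace for the target side, veth pairs whose host-side ends are enslaved to a bridge, iptables ACCEPT rules for the NVMe/TCP port, and ping checks of each address. A condensed sketch of the same setup, with one initiator/target pair instead of the two the script creates; interface names and addresses are taken from the trace:

```bash
#!/usr/bin/env bash
# Hedged sketch of the veth topology nvmf_veth_init builds above
# (the real script also creates nvmf_init_if2/nvmf_tgt_if2 the same way).
set -e

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# veth pairs: the *_if ends carry addresses, the *_br ends join the bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns "$NS"

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set lo up

# The bridge ties the host-side ends together so 10.0.0.1 can reach 10.0.0.3.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Open the NVMe/TCP port, as the ipts wrapper does (the wrapper only adds
# an SPDK_NVMF comment so teardown can find its own rules).
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

# Sanity check, matching the pings in the log.
ping -c 1 10.0.0.3
```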
00:19:36.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:19:36.114 00:19:36.114 --- 10.0.0.2 ping statistics --- 00:19:36.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.114 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:36.114 02:00:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:36.373 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:36.373 Waiting for block devices as requested 00:19:36.632 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:36.632 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:36.632 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:36.632 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:36.633 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:19:36.633 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:36.633 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:36.633 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:36.633 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:19:36.633 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:19:36.633 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:36.633 No valid GPT data, bailing 00:19:36.892 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:36.892 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:36.892 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:36.892 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:19:36.892 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:36.892 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:36.892 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:19:36.892 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:19:36.892 02:00:47 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:36.892 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:36.892 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:36.893 No valid GPT data, bailing 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:36.893 No valid GPT data, bailing 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:36.893 No valid GPT data, bailing 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:19:36.893 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:37.153 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid=7cdc77f7-6c10-48d3-83fa-703a290bdf89 -a 10.0.0.1 -t tcp -s 4420 00:19:37.153 00:19:37.153 Discovery Log Number of Records 2, Generation counter 2 00:19:37.153 =====Discovery Log Entry 0====== 00:19:37.153 trtype: tcp 00:19:37.153 adrfam: ipv4 00:19:37.153 subtype: current discovery subsystem 00:19:37.153 treq: not specified, sq flow control disable supported 00:19:37.153 portid: 1 00:19:37.153 trsvcid: 4420 00:19:37.153 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:37.153 traddr: 10.0.0.1 00:19:37.153 eflags: none 00:19:37.153 sectype: none 00:19:37.153 =====Discovery Log Entry 1====== 00:19:37.153 trtype: tcp 00:19:37.153 adrfam: ipv4 00:19:37.153 subtype: nvme subsystem 00:19:37.153 treq: not 
specified, sq flow control disable supported 00:19:37.153 portid: 1 00:19:37.153 trsvcid: 4420 00:19:37.153 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:37.153 traddr: 10.0.0.1 00:19:37.153 eflags: none 00:19:37.153 sectype: none 00:19:37.153 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:19:37.153 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:19:37.153 ===================================================== 00:19:37.153 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:37.153 ===================================================== 00:19:37.153 Controller Capabilities/Features 00:19:37.153 ================================ 00:19:37.153 Vendor ID: 0000 00:19:37.153 Subsystem Vendor ID: 0000 00:19:37.153 Serial Number: 8a088d09759b1cb09200 00:19:37.153 Model Number: Linux 00:19:37.153 Firmware Version: 6.8.9-20 00:19:37.153 Recommended Arb Burst: 0 00:19:37.153 IEEE OUI Identifier: 00 00 00 00:19:37.153 Multi-path I/O 00:19:37.154 May have multiple subsystem ports: No 00:19:37.154 May have multiple controllers: No 00:19:37.154 Associated with SR-IOV VF: No 00:19:37.154 Max Data Transfer Size: Unlimited 00:19:37.154 Max Number of Namespaces: 0 00:19:37.154 Max Number of I/O Queues: 1024 00:19:37.154 NVMe Specification Version (VS): 1.3 00:19:37.154 NVMe Specification Version (Identify): 1.3 00:19:37.154 Maximum Queue Entries: 1024 00:19:37.154 Contiguous Queues Required: No 00:19:37.154 Arbitration Mechanisms Supported 00:19:37.154 Weighted Round Robin: Not Supported 00:19:37.154 Vendor Specific: Not Supported 00:19:37.154 Reset Timeout: 7500 ms 00:19:37.154 Doorbell Stride: 4 bytes 00:19:37.154 NVM Subsystem Reset: Not Supported 00:19:37.154 Command Sets Supported 00:19:37.154 NVM Command Set: Supported 00:19:37.154 Boot Partition: Not Supported 00:19:37.154 Memory Page Size Minimum: 4096 bytes 00:19:37.154 Memory Page Size Maximum: 4096 bytes 00:19:37.154 Persistent Memory Region: Not Supported 00:19:37.154 Optional Asynchronous Events Supported 00:19:37.154 Namespace Attribute Notices: Not Supported 00:19:37.154 Firmware Activation Notices: Not Supported 00:19:37.154 ANA Change Notices: Not Supported 00:19:37.154 PLE Aggregate Log Change Notices: Not Supported 00:19:37.154 LBA Status Info Alert Notices: Not Supported 00:19:37.154 EGE Aggregate Log Change Notices: Not Supported 00:19:37.154 Normal NVM Subsystem Shutdown event: Not Supported 00:19:37.154 Zone Descriptor Change Notices: Not Supported 00:19:37.154 Discovery Log Change Notices: Supported 00:19:37.154 Controller Attributes 00:19:37.154 128-bit Host Identifier: Not Supported 00:19:37.154 Non-Operational Permissive Mode: Not Supported 00:19:37.154 NVM Sets: Not Supported 00:19:37.154 Read Recovery Levels: Not Supported 00:19:37.154 Endurance Groups: Not Supported 00:19:37.154 Predictable Latency Mode: Not Supported 00:19:37.154 Traffic Based Keep ALive: Not Supported 00:19:37.154 Namespace Granularity: Not Supported 00:19:37.154 SQ Associations: Not Supported 00:19:37.154 UUID List: Not Supported 00:19:37.154 Multi-Domain Subsystem: Not Supported 00:19:37.154 Fixed Capacity Management: Not Supported 00:19:37.154 Variable Capacity Management: Not Supported 00:19:37.154 Delete Endurance Group: Not Supported 00:19:37.154 Delete NVM Set: Not Supported 00:19:37.154 Extended LBA Formats Supported: Not Supported 00:19:37.154 Flexible Data 
Placement Supported: Not Supported 00:19:37.154 00:19:37.154 Controller Memory Buffer Support 00:19:37.154 ================================ 00:19:37.154 Supported: No 00:19:37.154 00:19:37.154 Persistent Memory Region Support 00:19:37.154 ================================ 00:19:37.154 Supported: No 00:19:37.154 00:19:37.154 Admin Command Set Attributes 00:19:37.154 ============================ 00:19:37.154 Security Send/Receive: Not Supported 00:19:37.154 Format NVM: Not Supported 00:19:37.154 Firmware Activate/Download: Not Supported 00:19:37.154 Namespace Management: Not Supported 00:19:37.154 Device Self-Test: Not Supported 00:19:37.154 Directives: Not Supported 00:19:37.154 NVMe-MI: Not Supported 00:19:37.154 Virtualization Management: Not Supported 00:19:37.154 Doorbell Buffer Config: Not Supported 00:19:37.154 Get LBA Status Capability: Not Supported 00:19:37.154 Command & Feature Lockdown Capability: Not Supported 00:19:37.154 Abort Command Limit: 1 00:19:37.154 Async Event Request Limit: 1 00:19:37.154 Number of Firmware Slots: N/A 00:19:37.154 Firmware Slot 1 Read-Only: N/A 00:19:37.154 Firmware Activation Without Reset: N/A 00:19:37.154 Multiple Update Detection Support: N/A 00:19:37.154 Firmware Update Granularity: No Information Provided 00:19:37.154 Per-Namespace SMART Log: No 00:19:37.154 Asymmetric Namespace Access Log Page: Not Supported 00:19:37.154 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:37.154 Command Effects Log Page: Not Supported 00:19:37.154 Get Log Page Extended Data: Supported 00:19:37.154 Telemetry Log Pages: Not Supported 00:19:37.154 Persistent Event Log Pages: Not Supported 00:19:37.154 Supported Log Pages Log Page: May Support 00:19:37.154 Commands Supported & Effects Log Page: Not Supported 00:19:37.154 Feature Identifiers & Effects Log Page:May Support 00:19:37.154 NVMe-MI Commands & Effects Log Page: May Support 00:19:37.154 Data Area 4 for Telemetry Log: Not Supported 00:19:37.154 Error Log Page Entries Supported: 1 00:19:37.154 Keep Alive: Not Supported 00:19:37.154 00:19:37.154 NVM Command Set Attributes 00:19:37.154 ========================== 00:19:37.154 Submission Queue Entry Size 00:19:37.154 Max: 1 00:19:37.154 Min: 1 00:19:37.154 Completion Queue Entry Size 00:19:37.154 Max: 1 00:19:37.154 Min: 1 00:19:37.154 Number of Namespaces: 0 00:19:37.154 Compare Command: Not Supported 00:19:37.154 Write Uncorrectable Command: Not Supported 00:19:37.154 Dataset Management Command: Not Supported 00:19:37.154 Write Zeroes Command: Not Supported 00:19:37.154 Set Features Save Field: Not Supported 00:19:37.154 Reservations: Not Supported 00:19:37.154 Timestamp: Not Supported 00:19:37.154 Copy: Not Supported 00:19:37.154 Volatile Write Cache: Not Present 00:19:37.154 Atomic Write Unit (Normal): 1 00:19:37.154 Atomic Write Unit (PFail): 1 00:19:37.154 Atomic Compare & Write Unit: 1 00:19:37.154 Fused Compare & Write: Not Supported 00:19:37.154 Scatter-Gather List 00:19:37.154 SGL Command Set: Supported 00:19:37.154 SGL Keyed: Not Supported 00:19:37.154 SGL Bit Bucket Descriptor: Not Supported 00:19:37.154 SGL Metadata Pointer: Not Supported 00:19:37.154 Oversized SGL: Not Supported 00:19:37.154 SGL Metadata Address: Not Supported 00:19:37.154 SGL Offset: Supported 00:19:37.154 Transport SGL Data Block: Not Supported 00:19:37.154 Replay Protected Memory Block: Not Supported 00:19:37.154 00:19:37.154 Firmware Slot Information 00:19:37.154 ========================= 00:19:37.154 Active slot: 0 00:19:37.154 00:19:37.154 00:19:37.154 Error Log 
00:19:37.154 ========= 00:19:37.154 00:19:37.154 Active Namespaces 00:19:37.154 ================= 00:19:37.154 Discovery Log Page 00:19:37.154 ================== 00:19:37.154 Generation Counter: 2 00:19:37.154 Number of Records: 2 00:19:37.154 Record Format: 0 00:19:37.154 00:19:37.154 Discovery Log Entry 0 00:19:37.154 ---------------------- 00:19:37.154 Transport Type: 3 (TCP) 00:19:37.154 Address Family: 1 (IPv4) 00:19:37.154 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:37.154 Entry Flags: 00:19:37.154 Duplicate Returned Information: 0 00:19:37.154 Explicit Persistent Connection Support for Discovery: 0 00:19:37.154 Transport Requirements: 00:19:37.154 Secure Channel: Not Specified 00:19:37.154 Port ID: 1 (0x0001) 00:19:37.154 Controller ID: 65535 (0xffff) 00:19:37.154 Admin Max SQ Size: 32 00:19:37.154 Transport Service Identifier: 4420 00:19:37.154 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:37.154 Transport Address: 10.0.0.1 00:19:37.154 Discovery Log Entry 1 00:19:37.154 ---------------------- 00:19:37.154 Transport Type: 3 (TCP) 00:19:37.154 Address Family: 1 (IPv4) 00:19:37.154 Subsystem Type: 2 (NVM Subsystem) 00:19:37.154 Entry Flags: 00:19:37.154 Duplicate Returned Information: 0 00:19:37.154 Explicit Persistent Connection Support for Discovery: 0 00:19:37.154 Transport Requirements: 00:19:37.154 Secure Channel: Not Specified 00:19:37.154 Port ID: 1 (0x0001) 00:19:37.154 Controller ID: 65535 (0xffff) 00:19:37.154 Admin Max SQ Size: 32 00:19:37.155 Transport Service Identifier: 4420 00:19:37.155 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:19:37.155 Transport Address: 10.0.0.1 00:19:37.155 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:37.415 get_feature(0x01) failed 00:19:37.415 get_feature(0x02) failed 00:19:37.415 get_feature(0x04) failed 00:19:37.415 ===================================================== 00:19:37.415 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:37.415 ===================================================== 00:19:37.415 Controller Capabilities/Features 00:19:37.415 ================================ 00:19:37.415 Vendor ID: 0000 00:19:37.415 Subsystem Vendor ID: 0000 00:19:37.415 Serial Number: f47ffb16f6295ca41d66 00:19:37.415 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:19:37.415 Firmware Version: 6.8.9-20 00:19:37.415 Recommended Arb Burst: 6 00:19:37.415 IEEE OUI Identifier: 00 00 00 00:19:37.415 Multi-path I/O 00:19:37.415 May have multiple subsystem ports: Yes 00:19:37.415 May have multiple controllers: Yes 00:19:37.415 Associated with SR-IOV VF: No 00:19:37.415 Max Data Transfer Size: Unlimited 00:19:37.415 Max Number of Namespaces: 1024 00:19:37.415 Max Number of I/O Queues: 128 00:19:37.415 NVMe Specification Version (VS): 1.3 00:19:37.415 NVMe Specification Version (Identify): 1.3 00:19:37.415 Maximum Queue Entries: 1024 00:19:37.415 Contiguous Queues Required: No 00:19:37.415 Arbitration Mechanisms Supported 00:19:37.415 Weighted Round Robin: Not Supported 00:19:37.415 Vendor Specific: Not Supported 00:19:37.415 Reset Timeout: 7500 ms 00:19:37.415 Doorbell Stride: 4 bytes 00:19:37.415 NVM Subsystem Reset: Not Supported 00:19:37.415 Command Sets Supported 00:19:37.415 NVM Command Set: Supported 00:19:37.415 Boot Partition: Not Supported 00:19:37.415 Memory 
Page Size Minimum: 4096 bytes 00:19:37.415 Memory Page Size Maximum: 4096 bytes 00:19:37.415 Persistent Memory Region: Not Supported 00:19:37.415 Optional Asynchronous Events Supported 00:19:37.415 Namespace Attribute Notices: Supported 00:19:37.415 Firmware Activation Notices: Not Supported 00:19:37.415 ANA Change Notices: Supported 00:19:37.415 PLE Aggregate Log Change Notices: Not Supported 00:19:37.415 LBA Status Info Alert Notices: Not Supported 00:19:37.415 EGE Aggregate Log Change Notices: Not Supported 00:19:37.415 Normal NVM Subsystem Shutdown event: Not Supported 00:19:37.415 Zone Descriptor Change Notices: Not Supported 00:19:37.415 Discovery Log Change Notices: Not Supported 00:19:37.415 Controller Attributes 00:19:37.415 128-bit Host Identifier: Supported 00:19:37.415 Non-Operational Permissive Mode: Not Supported 00:19:37.415 NVM Sets: Not Supported 00:19:37.415 Read Recovery Levels: Not Supported 00:19:37.415 Endurance Groups: Not Supported 00:19:37.415 Predictable Latency Mode: Not Supported 00:19:37.415 Traffic Based Keep ALive: Supported 00:19:37.415 Namespace Granularity: Not Supported 00:19:37.415 SQ Associations: Not Supported 00:19:37.415 UUID List: Not Supported 00:19:37.415 Multi-Domain Subsystem: Not Supported 00:19:37.415 Fixed Capacity Management: Not Supported 00:19:37.415 Variable Capacity Management: Not Supported 00:19:37.415 Delete Endurance Group: Not Supported 00:19:37.415 Delete NVM Set: Not Supported 00:19:37.415 Extended LBA Formats Supported: Not Supported 00:19:37.415 Flexible Data Placement Supported: Not Supported 00:19:37.415 00:19:37.415 Controller Memory Buffer Support 00:19:37.415 ================================ 00:19:37.415 Supported: No 00:19:37.415 00:19:37.415 Persistent Memory Region Support 00:19:37.415 ================================ 00:19:37.415 Supported: No 00:19:37.415 00:19:37.415 Admin Command Set Attributes 00:19:37.415 ============================ 00:19:37.415 Security Send/Receive: Not Supported 00:19:37.415 Format NVM: Not Supported 00:19:37.415 Firmware Activate/Download: Not Supported 00:19:37.415 Namespace Management: Not Supported 00:19:37.415 Device Self-Test: Not Supported 00:19:37.415 Directives: Not Supported 00:19:37.415 NVMe-MI: Not Supported 00:19:37.415 Virtualization Management: Not Supported 00:19:37.415 Doorbell Buffer Config: Not Supported 00:19:37.415 Get LBA Status Capability: Not Supported 00:19:37.415 Command & Feature Lockdown Capability: Not Supported 00:19:37.415 Abort Command Limit: 4 00:19:37.415 Async Event Request Limit: 4 00:19:37.415 Number of Firmware Slots: N/A 00:19:37.415 Firmware Slot 1 Read-Only: N/A 00:19:37.415 Firmware Activation Without Reset: N/A 00:19:37.415 Multiple Update Detection Support: N/A 00:19:37.416 Firmware Update Granularity: No Information Provided 00:19:37.416 Per-Namespace SMART Log: Yes 00:19:37.416 Asymmetric Namespace Access Log Page: Supported 00:19:37.416 ANA Transition Time : 10 sec 00:19:37.416 00:19:37.416 Asymmetric Namespace Access Capabilities 00:19:37.416 ANA Optimized State : Supported 00:19:37.416 ANA Non-Optimized State : Supported 00:19:37.416 ANA Inaccessible State : Supported 00:19:37.416 ANA Persistent Loss State : Supported 00:19:37.416 ANA Change State : Supported 00:19:37.416 ANAGRPID is not changed : No 00:19:37.416 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:19:37.416 00:19:37.416 ANA Group Identifier Maximum : 128 00:19:37.416 Number of ANA Group Identifiers : 128 00:19:37.416 Max Number of Allowed Namespaces : 1024 00:19:37.416 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:19:37.416 Command Effects Log Page: Supported 00:19:37.416 Get Log Page Extended Data: Supported 00:19:37.416 Telemetry Log Pages: Not Supported 00:19:37.416 Persistent Event Log Pages: Not Supported 00:19:37.416 Supported Log Pages Log Page: May Support 00:19:37.416 Commands Supported & Effects Log Page: Not Supported 00:19:37.416 Feature Identifiers & Effects Log Page:May Support 00:19:37.416 NVMe-MI Commands & Effects Log Page: May Support 00:19:37.416 Data Area 4 for Telemetry Log: Not Supported 00:19:37.416 Error Log Page Entries Supported: 128 00:19:37.416 Keep Alive: Supported 00:19:37.416 Keep Alive Granularity: 1000 ms 00:19:37.416 00:19:37.416 NVM Command Set Attributes 00:19:37.416 ========================== 00:19:37.416 Submission Queue Entry Size 00:19:37.416 Max: 64 00:19:37.416 Min: 64 00:19:37.416 Completion Queue Entry Size 00:19:37.416 Max: 16 00:19:37.416 Min: 16 00:19:37.416 Number of Namespaces: 1024 00:19:37.416 Compare Command: Not Supported 00:19:37.416 Write Uncorrectable Command: Not Supported 00:19:37.416 Dataset Management Command: Supported 00:19:37.416 Write Zeroes Command: Supported 00:19:37.416 Set Features Save Field: Not Supported 00:19:37.416 Reservations: Not Supported 00:19:37.416 Timestamp: Not Supported 00:19:37.416 Copy: Not Supported 00:19:37.416 Volatile Write Cache: Present 00:19:37.416 Atomic Write Unit (Normal): 1 00:19:37.416 Atomic Write Unit (PFail): 1 00:19:37.416 Atomic Compare & Write Unit: 1 00:19:37.416 Fused Compare & Write: Not Supported 00:19:37.416 Scatter-Gather List 00:19:37.416 SGL Command Set: Supported 00:19:37.416 SGL Keyed: Not Supported 00:19:37.416 SGL Bit Bucket Descriptor: Not Supported 00:19:37.416 SGL Metadata Pointer: Not Supported 00:19:37.416 Oversized SGL: Not Supported 00:19:37.416 SGL Metadata Address: Not Supported 00:19:37.416 SGL Offset: Supported 00:19:37.416 Transport SGL Data Block: Not Supported 00:19:37.416 Replay Protected Memory Block: Not Supported 00:19:37.416 00:19:37.416 Firmware Slot Information 00:19:37.416 ========================= 00:19:37.416 Active slot: 0 00:19:37.416 00:19:37.416 Asymmetric Namespace Access 00:19:37.416 =========================== 00:19:37.416 Change Count : 0 00:19:37.416 Number of ANA Group Descriptors : 1 00:19:37.416 ANA Group Descriptor : 0 00:19:37.416 ANA Group ID : 1 00:19:37.416 Number of NSID Values : 1 00:19:37.416 Change Count : 0 00:19:37.416 ANA State : 1 00:19:37.416 Namespace Identifier : 1 00:19:37.416 00:19:37.416 Commands Supported and Effects 00:19:37.416 ============================== 00:19:37.416 Admin Commands 00:19:37.416 -------------- 00:19:37.416 Get Log Page (02h): Supported 00:19:37.416 Identify (06h): Supported 00:19:37.416 Abort (08h): Supported 00:19:37.416 Set Features (09h): Supported 00:19:37.416 Get Features (0Ah): Supported 00:19:37.416 Asynchronous Event Request (0Ch): Supported 00:19:37.416 Keep Alive (18h): Supported 00:19:37.416 I/O Commands 00:19:37.416 ------------ 00:19:37.416 Flush (00h): Supported 00:19:37.416 Write (01h): Supported LBA-Change 00:19:37.416 Read (02h): Supported 00:19:37.416 Write Zeroes (08h): Supported LBA-Change 00:19:37.416 Dataset Management (09h): Supported 00:19:37.416 00:19:37.416 Error Log 00:19:37.416 ========= 00:19:37.416 Entry: 0 00:19:37.416 Error Count: 0x3 00:19:37.416 Submission Queue Id: 0x0 00:19:37.416 Command Id: 0x5 00:19:37.416 Phase Bit: 0 00:19:37.416 Status Code: 0x2 00:19:37.416 Status Code Type: 0x0 00:19:37.416 Do Not Retry: 1 00:19:37.416 Error 
Location: 0x28 00:19:37.416 LBA: 0x0 00:19:37.416 Namespace: 0x0 00:19:37.416 Vendor Log Page: 0x0 00:19:37.416 ----------- 00:19:37.416 Entry: 1 00:19:37.416 Error Count: 0x2 00:19:37.416 Submission Queue Id: 0x0 00:19:37.416 Command Id: 0x5 00:19:37.416 Phase Bit: 0 00:19:37.416 Status Code: 0x2 00:19:37.416 Status Code Type: 0x0 00:19:37.416 Do Not Retry: 1 00:19:37.416 Error Location: 0x28 00:19:37.416 LBA: 0x0 00:19:37.416 Namespace: 0x0 00:19:37.416 Vendor Log Page: 0x0 00:19:37.416 ----------- 00:19:37.416 Entry: 2 00:19:37.416 Error Count: 0x1 00:19:37.416 Submission Queue Id: 0x0 00:19:37.416 Command Id: 0x4 00:19:37.416 Phase Bit: 0 00:19:37.416 Status Code: 0x2 00:19:37.416 Status Code Type: 0x0 00:19:37.416 Do Not Retry: 1 00:19:37.416 Error Location: 0x28 00:19:37.416 LBA: 0x0 00:19:37.416 Namespace: 0x0 00:19:37.416 Vendor Log Page: 0x0 00:19:37.416 00:19:37.416 Number of Queues 00:19:37.416 ================ 00:19:37.416 Number of I/O Submission Queues: 128 00:19:37.416 Number of I/O Completion Queues: 128 00:19:37.416 00:19:37.416 ZNS Specific Controller Data 00:19:37.416 ============================ 00:19:37.416 Zone Append Size Limit: 0 00:19:37.416 00:19:37.416 00:19:37.416 Active Namespaces 00:19:37.416 ================= 00:19:37.416 get_feature(0x05) failed 00:19:37.416 Namespace ID:1 00:19:37.416 Command Set Identifier: NVM (00h) 00:19:37.416 Deallocate: Supported 00:19:37.416 Deallocated/Unwritten Error: Not Supported 00:19:37.416 Deallocated Read Value: Unknown 00:19:37.416 Deallocate in Write Zeroes: Not Supported 00:19:37.416 Deallocated Guard Field: 0xFFFF 00:19:37.416 Flush: Supported 00:19:37.416 Reservation: Not Supported 00:19:37.416 Namespace Sharing Capabilities: Multiple Controllers 00:19:37.416 Size (in LBAs): 1310720 (5GiB) 00:19:37.416 Capacity (in LBAs): 1310720 (5GiB) 00:19:37.416 Utilization (in LBAs): 1310720 (5GiB) 00:19:37.416 UUID: 2ea8bfeb-0c02-48c7-82d8-de74e5e03ccf 00:19:37.416 Thin Provisioning: Not Supported 00:19:37.416 Per-NS Atomic Units: Yes 00:19:37.416 Atomic Boundary Size (Normal): 0 00:19:37.416 Atomic Boundary Size (PFail): 0 00:19:37.416 Atomic Boundary Offset: 0 00:19:37.416 NGUID/EUI64 Never Reused: No 00:19:37.416 ANA group ID: 1 00:19:37.416 Namespace Write Protected: No 00:19:37.416 Number of LBA Formats: 1 00:19:37.416 Current LBA Format: LBA Format #00 00:19:37.416 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:19:37.416 00:19:37.416 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:19:37.416 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:37.416 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:19:37.416 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:37.416 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:19:37.416 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:37.416 02:00:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:37.416 rmmod nvme_tcp 00:19:37.416 rmmod nvme_fabrics 00:19:37.416 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:37.676 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:19:37.676 02:00:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:19:37.676 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:19:37.676 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:37.676 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:37.676 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:37.676 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:19:37.676 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:19:37.676 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:37.676 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:37.676 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:37.676 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:37.676 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:37.676 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:37.676 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:37.676 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:37.676 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:37.676 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:37.676 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:37.676 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:37.676 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:37.676 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:37.676 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:37.676 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:37.676 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:37.676 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:37.676 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.676 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:37.676 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.935 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:19:37.935 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:19:37.935 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:37.935 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:19:37.935 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:37.935 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:37.935 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:37.935 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:37.935 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:19:37.935 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:19:37.935 02:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:38.503 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:38.763 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:38.763 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:38.763 00:19:38.763 real 0m3.216s 00:19:38.763 user 0m1.193s 00:19:38.763 sys 0m1.407s 00:19:38.763 02:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:38.763 ************************************ 00:19:38.763 END TEST nvmf_identify_kernel_target 00:19:38.763 ************************************ 00:19:38.763 02:00:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.764 02:00:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:38.764 02:00:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:38.764 02:00:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:38.764 02:00:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.764 ************************************ 00:19:38.764 START TEST nvmf_auth_host 00:19:38.764 ************************************ 00:19:38.764 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:38.764 * Looking for test storage... 
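For reference, the kernel target this test configured (configure_kernel_target, traced earlier) and tore down (clean_kernel_target, just above) is plain nvmet configfs. A hedged sketch of the equivalent sequence; the attribute file names are the standard nvmet configfs layout rather than anything SPDK-specific, and /dev/nvme1n1 is the block device the trace selected after the GPT checks:

```bash
#!/usr/bin/env bash
# Hedged sketch of configure_kernel_target / clean_kernel_target as traced.
set -e

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=$nvmet/ports/1

modprobe nvmet

# Subsystem with one namespace backed by a kernel-owned NVMe block device.
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"

# TCP listener on the address the veth setup assigned to the initiator side.
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"

# Exporting the subsystem is just linking it under the port.
ln -s "$subsys" "$port/subsystems/"

# Teardown mirrors creation in reverse, as clean_kernel_target does above.
echo 0 > "$subsys/namespaces/1/enable"
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet
```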
00:19:39.023 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:39.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.023 --rc genhtml_branch_coverage=1 00:19:39.023 --rc genhtml_function_coverage=1 00:19:39.023 --rc genhtml_legend=1 00:19:39.023 --rc geninfo_all_blocks=1 00:19:39.023 --rc geninfo_unexecuted_blocks=1 00:19:39.023 00:19:39.023 ' 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:39.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.023 --rc genhtml_branch_coverage=1 00:19:39.023 --rc genhtml_function_coverage=1 00:19:39.023 --rc genhtml_legend=1 00:19:39.023 --rc geninfo_all_blocks=1 00:19:39.023 --rc geninfo_unexecuted_blocks=1 00:19:39.023 00:19:39.023 ' 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:39.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.023 --rc genhtml_branch_coverage=1 00:19:39.023 --rc genhtml_function_coverage=1 00:19:39.023 --rc genhtml_legend=1 00:19:39.023 --rc geninfo_all_blocks=1 00:19:39.023 --rc geninfo_unexecuted_blocks=1 00:19:39.023 00:19:39.023 ' 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:39.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.023 --rc genhtml_branch_coverage=1 00:19:39.023 --rc genhtml_function_coverage=1 00:19:39.023 --rc genhtml_legend=1 00:19:39.023 --rc geninfo_all_blocks=1 00:19:39.023 --rc geninfo_unexecuted_blocks=1 00:19:39.023 00:19:39.023 ' 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:39.023 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:39.023 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:39.024 Cannot find device "nvmf_init_br" 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:39.024 Cannot find device "nvmf_init_br2" 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:39.024 Cannot find device "nvmf_tgt_br" 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:39.024 Cannot find device "nvmf_tgt_br2" 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:39.024 Cannot find device "nvmf_init_br" 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:39.024 Cannot find device "nvmf_init_br2" 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:39.024 Cannot find device "nvmf_tgt_br" 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:39.024 Cannot find device "nvmf_tgt_br2" 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:39.024 Cannot find device "nvmf_br" 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:19:39.024 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:39.284 Cannot find device "nvmf_init_if" 00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:39.284 Cannot find device "nvmf_init_if2" 00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:39.284 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:39.284 02:00:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:39.284 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
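The nvmf_veth_init sequence traced here (common.sh@177 onward) builds the two-sided test topology: four veth pairs whose *_if ends carry addresses and whose *_br peer ends become ports on one bridge, with the target ends moved into the nvmf_tgt_ns_spdk namespace. The "Cannot find device" failures just before it are the teardown pass running first against a clean host, each deliberately swallowed by the `true` at the same line number. Flattened out of the trace, the setup is roughly:

ip netns add nvmf_tgt_ns_spdk
# one veth pair per endpoint; the *_br peers become bridge ports
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if       # initiator side
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for link in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$link" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$port" master nvmf_br
done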
00:19:39.284 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:39.543 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:39.543 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:19:39.543 00:19:39.543 --- 10.0.0.3 ping statistics --- 00:19:39.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.543 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:39.543 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:39.543 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:19:39.543 00:19:39.543 --- 10.0.0.4 ping statistics --- 00:19:39.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.543 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:39.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:39.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:19:39.543 00:19:39.543 --- 10.0.0.1 ping statistics --- 00:19:39.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.543 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:39.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:39.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:19:39.543 00:19:39.543 --- 10.0.0.2 ping statistics --- 00:19:39.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.543 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=92899 00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 92899 00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 92899 ']' 00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
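The `ipts` helper at common.sh@790 is visible here only through its expansion; a plausible reconstruction, inferred from the expanded iptables commands in the trace, is a thin wrapper that tags each rule with an SPDK_NVMF comment so the fini path can filter the rules back out. The four pings then verify reachability in both directions across the bridge before the target is started:

ipts() {
    # tag the rule so teardown can strip it again with:
    #   iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.3                                   # host -> target ns
ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target ns -> host
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2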
00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.543 02:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.802 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:39.802 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:19:39.802 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:39.802 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:39.802 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.802 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.802 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:19:39.802 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:19:39.802 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:39.802 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:39.802 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:39.802 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:39.802 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:39.802 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:39.802 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4afba3f7c550bf354834a3580b5d1340 00:19:39.802 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:39.802 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.XHT 00:19:39.802 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4afba3f7c550bf354834a3580b5d1340 0 00:19:39.802 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4afba3f7c550bf354834a3580b5d1340 0 00:19:39.802 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:39.802 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:39.802 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4afba3f7c550bf354834a3580b5d1340 00:19:39.802 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:39.802 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:40.061 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.XHT 00:19:40.061 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.XHT 00:19:40.061 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.XHT 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:40.062 02:00:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3fff76aeceb4629131f38d35ba55946cc21820b9d2f329a0f794c3253dafb6a9 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.5Sl 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3fff76aeceb4629131f38d35ba55946cc21820b9d2f329a0f794c3253dafb6a9 3 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3fff76aeceb4629131f38d35ba55946cc21820b9d2f329a0f794c3253dafb6a9 3 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3fff76aeceb4629131f38d35ba55946cc21820b9d2f329a0f794c3253dafb6a9 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.5Sl 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.5Sl 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.5Sl 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=885003fcf49564d2b3fd8e86eb38008273b1ed22ad8bcd68 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.L21 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 885003fcf49564d2b3fd8e86eb38008273b1ed22ad8bcd68 0 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 885003fcf49564d2b3fd8e86eb38008273b1ed22ad8bcd68 0 
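Each gen_dhchap_key invocation repeats the same pattern: pull len/2 random bytes via xxd, mktemp a key file, and format the hex string into a DHHC-1 secret via an inline python snippet whose body the trace does not show. A sketch consistent with what is shown follows; the base64(key || little-endian CRC-32) payload layout is an assumption, based on the DHHC-1 secret representation used by nvme-cli, not something the log confirms.

gen_dhchap_key() {
    local digest=$1 len=$2 key file
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # len hex characters of entropy
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 - "$key" "${digests[$digest]}" > "$file" <<'PY'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed DHHC-1 payload: key || CRC-32(key)
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY
    chmod 0600 "$file"
    echo "$file"
}

So `gen_dhchap_key null 32`, as in the first call above, would emit a path like /tmp/spdk.key-null.XHT containing a secret of the form DHHC-1:00:...:, and the sha512 variants carry digest index 03.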
00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=885003fcf49564d2b3fd8e86eb38008273b1ed22ad8bcd68 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.L21 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.L21 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.L21 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f895f0ae150d84a48cf420841d6e22243246206bd8dd4f19 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.SLb 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f895f0ae150d84a48cf420841d6e22243246206bd8dd4f19 2 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f895f0ae150d84a48cf420841d6e22243246206bd8dd4f19 2 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f895f0ae150d84a48cf420841d6e22243246206bd8dd4f19 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.SLb 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.SLb 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.SLb 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.062 02:00:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f18fdb03e6881d38a76ec74cb3442022 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.l5d 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f18fdb03e6881d38a76ec74cb3442022 1 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f18fdb03e6881d38a76ec74cb3442022 1 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f18fdb03e6881d38a76ec74cb3442022 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:19:40.062 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.l5d 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.l5d 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.l5d 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a923499f8bd973d1e72497afd70704a8 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.bCF 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a923499f8bd973d1e72497afd70704a8 1 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a923499f8bd973d1e72497afd70704a8 1 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=a923499f8bd973d1e72497afd70704a8 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.bCF 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.bCF 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.bCF 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b1137b5667760c3efe5b4b21cc2cc8ea43735c5c099f6fb8 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Pmi 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b1137b5667760c3efe5b4b21cc2cc8ea43735c5c099f6fb8 2 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b1137b5667760c3efe5b4b21cc2cc8ea43735c5c099f6fb8 2 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b1137b5667760c3efe5b4b21cc2cc8ea43735c5c099f6fb8 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Pmi 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Pmi 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Pmi 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:40.322 02:00:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3767b7716650b89e1ee86eace9d31ec2 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.P7K 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3767b7716650b89e1ee86eace9d31ec2 0 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3767b7716650b89e1ee86eace9d31ec2 0 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3767b7716650b89e1ee86eace9d31ec2 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.P7K 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.P7K 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.P7K 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:19:40.322 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:19:40.323 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:40.323 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7f767cc6ab0495b563033b418808c790646498fd86e567ae772664c60c7bab72 00:19:40.323 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:40.323 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Gs0 00:19:40.323 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7f767cc6ab0495b563033b418808c790646498fd86e567ae772664c60c7bab72 3 00:19:40.323 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7f767cc6ab0495b563033b418808c790646498fd86e567ae772664c60c7bab72 3 00:19:40.323 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:40.323 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:40.323 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7f767cc6ab0495b563033b418808c790646498fd86e567ae772664c60c7bab72 00:19:40.323 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:19:40.323 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:19:40.582 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Gs0 00:19:40.582 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Gs0 00:19:40.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.582 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Gs0 00:19:40.582 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:19:40.582 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 92899 00:19:40.582 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 92899 ']' 00:19:40.582 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.582 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.582 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.582 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.582 02:00:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.XHT 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.5Sl ]] 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5Sl 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.L21 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.SLb ]] 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.SLb 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.l5d 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.bCF ]] 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.bCF 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Pmi 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.P7K ]] 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.P7K 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Gs0 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:40.841 02:00:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:40.841 02:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:41.409 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:41.409 Waiting for block devices as requested 00:19:41.409 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:41.409 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:41.976 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:41.976 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:41.976 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:19:41.976 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:41.976 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:41.976 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:41.976 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:19:41.976 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:19:41.976 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:41.976 No valid GPT data, bailing 00:19:41.976 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:41.976 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:41.976 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:19:41.976 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:19:41.976 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:41.976 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:41.976 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:19:41.976 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:19:41.976 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:41.976 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:41.976 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:19:41.976 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:19:41.976 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:41.976 No valid GPT data, bailing 00:19:41.976 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:42.235 No valid GPT data, bailing 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:42.235 No valid GPT data, bailing 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid=7cdc77f7-6c10-48d3-83fa-703a290bdf89 -a 10.0.0.1 -t tcp -s 4420 00:19:42.235 00:19:42.235 Discovery Log Number of Records 2, Generation counter 2 00:19:42.235 =====Discovery Log Entry 0====== 00:19:42.235 trtype: tcp 00:19:42.235 adrfam: ipv4 00:19:42.235 subtype: current discovery subsystem 00:19:42.235 treq: not specified, sq flow control disable supported 00:19:42.235 portid: 1 00:19:42.235 trsvcid: 4420 00:19:42.235 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:42.235 traddr: 10.0.0.1 00:19:42.235 eflags: none 00:19:42.235 sectype: none 00:19:42.235 =====Discovery Log Entry 1====== 00:19:42.235 trtype: tcp 00:19:42.235 adrfam: ipv4 00:19:42.235 subtype: nvme subsystem 00:19:42.235 treq: not specified, sq flow control disable supported 00:19:42.235 portid: 1 00:19:42.235 trsvcid: 4420 00:19:42.235 subnqn: nqn.2024-02.io.spdk:cnode0 00:19:42.235 traddr: 10.0.0.1 00:19:42.235 eflags: none 00:19:42.235 sectype: none 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==: 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:42.235 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:42.494 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==: 00:19:42.494 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: ]] 00:19:42.494 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: 00:19:42.494 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:42.494 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:19:42.494 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:42.494 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:42.494 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:19:42.494 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:42.494 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:19:42.494 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:42.494 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:42.494 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:42.494 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:42.494 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.494 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.494 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.494 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:42.494 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:42.494 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:42.494 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:42.494 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.494 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.494 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:42.494 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.494 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:42.494 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:19:42.494 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:42.494 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.494 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.494 02:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.494 nvme0n1 00:19:42.494 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.494 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.494 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:42.494 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.494 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.494 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.494 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.494 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.494 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.494 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGFmYmEzZjdjNTUwYmYzNTQ4MzRhMzU4MGI1ZDEzNDA+xYmS: 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGFmYmEzZjdjNTUwYmYzNTQ4MzRhMzU4MGI1ZDEzNDA+xYmS: 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: ]] 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.753 nvme0n1 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.753 
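For reference, the target these attach cycles hit was assembled a moment earlier (the nvmet_auth_init/configure_kernel_target frames) entirely through configfs mkdir/echo/ln -s. A condensed sketch with the paths and values from this run; the attribute names are the standard nvmet configfs ones rather than something the trace shows (xtrace does not print redirection targets), and the serial and allow-any-host echoes are omitted here for the same reason:

    sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$sub" "$sub/namespaces/1" "$port"
    echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"   # backing block device picked by the scan above
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"                      # expose the subsystem on the port

The nvme discover output earlier in the trace (two records: the discovery subsystem plus nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420) is exactly what this layout should produce.
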
02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==: 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==: 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: ]] 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.753 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:42.754 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:42.754 02:00:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:42.754 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:42.754 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.754 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.754 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:42.754 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.754 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:42.754 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:42.754 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:42.754 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.754 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.754 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.012 nvme0n1 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej: 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: 00:19:43.012 02:00:53 
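Every key index ends in the same host-side cycle that the rpc_cmd frames show: pin the negotiable digests and DH groups, attach with the named keyring keys, check that the controller materialized, and detach. Condensed with the values from this pass (rpc_cmd is the harness wrapper around scripts/rpc.py):

    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    scripts/rpc.py bdev_nvme_get_controllers        # expect a single controller named nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0
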
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej: 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: ]] 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.012 nvme0n1 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.012 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.013 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:43.013 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.013 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.013 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.013 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.013 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.013 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.013 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjExMzdiNTY2Nzc2MGMzZWZlNWI0YjIxY2MyY2M4ZWE0MzczNWM1YzA5OWY2ZmI4EtsZmg==: 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjExMzdiNTY2Nzc2MGMzZWZlNWI0YjIxY2MyY2M4ZWE0MzczNWM1YzA5OWY2ZmI4EtsZmg==: 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: ]] 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.272 02:00:53 
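On the target side, the 'hmac(sha256)', ffdhe2048, and DHHC-1 echoes just above are nvmet_auth_set_key programming the host entry created during init. The configfs attribute names below are the ones the Linux nvmet auth support exposes; they are inferred, not visible in the trace, and the DHHC-1 values are elided:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)'  > "$host/dhchap_hash"      # kernel crypto name for the negotiated digest
    echo ffdhe2048       > "$host/dhchap_dhgroup"
    echo 'DHHC-1:01:...' > "$host/dhchap_key"       # host secret for this keyid
    echo 'DHHC-1:01:...' > "$host/dhchap_ctrl_key"  # controller secret, bidirectional auth only
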
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.272 nvme0n1 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:43.272 
02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2Y3NjdjYzZhYjA0OTViNTYzMDMzYjQxODgwOGM3OTA2NDY0OThmZDg2ZTU2N2FlNzcyNjY0YzYwYzdiYWI3Mjg1cH8=: 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2Y3NjdjYzZhYjA0OTViNTYzMDMzYjQxODgwOGM3OTA2NDY0OThmZDg2ZTU2N2FlNzcyNjY0YzYwYzdiYWI3Mjg1cH8=: 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.272 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
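Note the attach just issued for keyid 4: its controller-key slot was empty (the [[ -n '' ]] guard skipped ckey4 during keyring staging, and the trace shows ckey= unset here), so the command carries only --dhchap-key key4 and authentication runs in one direction: the host proves its identity, but does not challenge the controller. The same command, condensed from the frames above:

    # unidirectional DH-HMAC-CHAP: no --dhchap-ctrlr-key
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
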
00:19:43.531 nvme0n1 00:19:43.531 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.531 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.531 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.531 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.531 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:43.531 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.531 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.531 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.531 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.531 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.531 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.531 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:43.531 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:43.531 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:19:43.531 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:43.531 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:43.531 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:43.531 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:43.531 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGFmYmEzZjdjNTUwYmYzNTQ4MzRhMzU4MGI1ZDEzNDA+xYmS: 00:19:43.531 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: 00:19:43.531 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:43.531 02:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:43.790 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGFmYmEzZjdjNTUwYmYzNTQ4MzRhMzU4MGI1ZDEzNDA+xYmS: 00:19:43.790 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: ]] 00:19:43.790 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: 00:19:43.790 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:19:43.790 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:43.790 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:43.790 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:43.790 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:43.790 02:00:54 
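Here the sweep rolls over to the next DH group (ffdhe3072) and restarts the key indices. The host/auth.sh@100-104 frames scattered through the trace give the shape of the whole matrix; reconstructed from those frames, with the helper names as they appear above:

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the kernel target
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify, detach
            done
        done
    done
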
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.790 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:43.790 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.790 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.790 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.790 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:43.790 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:43.790 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:43.790 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:43.790 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.790 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.790 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:43.790 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.790 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:43.790 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:43.790 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:43.790 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.790 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.790 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.049 nvme0n1 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:44.049 02:00:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==: 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==: 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: ]] 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:44.049 02:00:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.049 nvme0n1 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.049 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej: 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej: 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: ]] 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.309 nvme0n1 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:19:44.309 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjExMzdiNTY2Nzc2MGMzZWZlNWI0YjIxY2MyY2M4ZWE0MzczNWM1YzA5OWY2ZmI4EtsZmg==:
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C:
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjExMzdiNTY2Nzc2MGMzZWZlNWI0YjIxY2MyY2M4ZWE0MzczNWM1YzA5OWY2ZmI4EtsZmg==:
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: ]]
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C:
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:44.310 02:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:44.569 nvme0n1
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2Y3NjdjYzZhYjA0OTViNTYzMDMzYjQxODgwOGM3OTA2NDY0OThmZDg2ZTU2N2FlNzcyNjY0YzYwYzdiYWI3Mjg1cH8=:
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2Y3NjdjYzZhYjA0OTViNTYzMDMzYjQxODgwOGM3OTA2NDY0OThmZDg2ZTU2N2FlNzcyNjY0YzYwYzdiYWI3Mjg1cH8=:
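The echo 'hmac(sha256)' / echo ffdhe3072 / echo DHHC-1:... records above are nvmet_auth_set_key programming the in-kernel nvmet target for host nqn.2024-02.io.spdk:host0. A rough sketch of such a helper, assuming the standard Linux nvmet configfs attributes (the real helper lives in test/nvmf/host/auth.sh and may differ in detail):

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)" > "$host/dhchap_hash"     # the echo 'hmac(sha256)' records
        echo "$dhgroup" > "$host/dhchap_dhgroup"       # the echo ffdhe3072 records
        echo "${keys[keyid]}" > "$host/dhchap_key"     # host secret (DHHC-1:...)
        [[ -z ${ckeys[keyid]} ]] ||                    # controller secret, when present
            echo "${ckeys[keyid]}" > "$host/dhchap_ctrl_key"
    }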
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:44.570 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:44.830 nvme0n1
00:19:44.830 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:44.830 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:19:44.830 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:19:44.830 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:44.830 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:44.830 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:44.830 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:44.830 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:44.830 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:44.830 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:44.830 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:44.830 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:19:44.830 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:19:44.830 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:19:44.830 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:19:44.830 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:19:44.830 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:19:44.830 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:19:44.830 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGFmYmEzZjdjNTUwYmYzNTQ4MzRhMzU4MGI1ZDEzNDA+xYmS:
00:19:44.830 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=:
00:19:44.830 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:19:44.830 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:19:45.398 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGFmYmEzZjdjNTUwYmYzNTQ4MzRhMzU4MGI1ZDEzNDA+xYmS:
00:19:45.398 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: ]]
00:19:45.398 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=:
00:19:45.398 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:19:45.398 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:19:45.398 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:19:45.398 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:19:45.398 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:19:45.398 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:19:45.398 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:19:45.398 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:45.398 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:45.398 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:45.398 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
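get_main_ns_ip itself can be read straight back out of the nvmf/common.sh@769-783 records that follow every call: it maps the transport to an environment-variable name and resolves it via indirect expansion. A reconstruction from the trace (the name of the transport variable is an assumption; the trace only shows its expanded value, tcp):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1               # trace: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}               # here: NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                        # indirection: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                      # -> 10.0.0.1
    }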
00:19:45.398 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:19:45.398 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:19:45.398 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:19:45.398 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:45.398 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:45.398 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:19:45.398 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:45.398 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:19:45.398 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:19:45.398 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:19:45.398 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:45.398 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:45.398 02:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:45.657 nvme0n1
00:19:45.657 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:45.657 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:19:45.657 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:19:45.657 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:45.657 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:45.657 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:45.657 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:45.657 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:45.657 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:45.657 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:45.657 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:45.657 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:19:45.657 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:19:45.657 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:19:45.657 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:19:45.657 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:19:45.657 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:19:45.657 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==:
00:19:45.657 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==:
00:19:45.657 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:19:45.657 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:19:45.657 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==:
00:19:45.657 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: ]]
00:19:45.657 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==:
00:19:45.658 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:19:45.658 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:19:45.658 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:19:45.658 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:19:45.658 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:19:45.658 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:19:45.658 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:19:45.658 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:45.658 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:45.658 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:45.658 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:19:45.658 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:19:45.658 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:19:45.658 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:19:45.658 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:45.658 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:45.658 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:19:45.658 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:45.658 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:19:45.658 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:19:45.658 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:19:45.658 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:45.658 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
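The secrets traced above all use the NVMe DH-HMAC-CHAP secret representation, DHHC-1:<t>:<base64>:, where <t> hints the transformation hash (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload carries the raw secret plus a 4-byte CRC. That is an interpretation from the NVMe-oF in-band authentication spec, not something this log states; a quick, illustrative length check on the keyid-2 secret (GNU coreutils base64):

    s='DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej:'
    b64=${s#DHHC-1:??:}; b64=${b64%:}
    # decoded payload minus the trailing CRC-32 -> 32, a 32-byte secret
    echo $(( $(printf %s "$b64" | base64 -d | wc -c) - 4 ))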
00:19:45.658 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:45.917 nvme0n1
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej:
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO:
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej:
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: ]]
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO:
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:45.917 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:46.177 nvme0n1
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
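Stripped of the xtrace bookkeeping, each connect_authenticate pass in this trace is two RPCs, one check and a teardown. The same sequence by hand with SPDK's scripts/rpc.py (key2/ckey2 are keyring names registered earlier in the test, outside this excerpt):

    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    [[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0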
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjExMzdiNTY2Nzc2MGMzZWZlNWI0YjIxY2MyY2M4ZWE0MzczNWM1YzA5OWY2ZmI4EtsZmg==:
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C:
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjExMzdiNTY2Nzc2MGMzZWZlNWI0YjIxY2MyY2M4ZWE0MzczNWM1YzA5OWY2ZmI4EtsZmg==:
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: ]]
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C:
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:46.177 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:46.437 nvme0n1
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2Y3NjdjYzZhYjA0OTViNTYzMDMzYjQxODgwOGM3OTA2NDY0OThmZDg2ZTU2N2FlNzcyNjY0YzYwYzdiYWI3Mjg1cH8=:
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2Y3NjdjYzZhYjA0OTViNTYzMDMzYjQxODgwOGM3OTA2NDY0OThmZDg2ZTU2N2FlNzcyNjY0YzYwYzdiYWI3Mjg1cH8=:
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:46.437 02:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:46.437 nvme0n1
00:19:46.957 02:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host --
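Key id 4 has no controller key (ckey= above), so the ${ckeys[keyid]:+...} expansion at host/auth.sh@58 produces no words and the attach just traced carries only --dhchap-key key4: the host proves itself to the target but does not challenge the controller back. The idiom in isolation, using the real ckey2 value from this log:

    ckeys=([2]="DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO:" [4]="")
    keyid=2; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${ckey[@]}"    # --dhchap-ctrlr-key ckey2 -> bidirectional auth
    keyid=4; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"   # 0 -> flag omitted, unidirectional auth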
00:19:46.957 02:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:46.697 02:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:19:46.697 02:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:46.697 02:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:19:46.697 02:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:46.697 02:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:46.697 02:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:46.697 02:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:46.697 02:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:46.697 02:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:46.697 02:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:46.697 02:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:19:46.697 02:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:19:46.697 02:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:19:46.697 02:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:19:46.697 02:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:19:46.697 02:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:19:46.697 02:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:19:46.697 02:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGFmYmEzZjdjNTUwYmYzNTQ4MzRhMzU4MGI1ZDEzNDA+xYmS:
00:19:46.697 02:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=:
00:19:46.697 02:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:19:46.697 02:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:19:48.074 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGFmYmEzZjdjNTUwYmYzNTQ4MzRhMzU4MGI1ZDEzNDA+xYmS:
00:19:48.074 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: ]]
00:19:48.074 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=:
00:19:48.074 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:19:48.075 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:19:48.075 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:19:48.075 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:19:48.075 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:19:48.075 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:19:48.075 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:19:48.075 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:48.075 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:48.075 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:48.075 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:19:48.075 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:19:48.075 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:19:48.075 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:19:48.075 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:48.075 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:48.075 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:19:48.075 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:48.075 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:19:48.075 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:19:48.075 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:19:48.075 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:48.075 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:48.075 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:48.333 nvme0n1
00:19:48.333 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:48.333 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:19:48.333 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:19:48.333 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:48.333 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:48.333 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:48.333 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:48.333 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:48.592 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:48.592 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:48.592 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:48.592 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:19:48.592 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:19:48.592 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:19:48.592 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:19:48.592 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:19:48.592 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:19:48.592 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==:
00:19:48.592 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==:
00:19:48.592 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:19:48.592 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:19:48.592 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==:
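The recurring common/autotest_common.sh@563 / @10 / @591 triplets around every RPC above are harness plumbing, not test logic: xtrace_disable mutes tracing while rpc.py runs, and the [[ 0 == 0 ]] record is the wrapper asserting the RPC's exit status. Roughly this shape (a sketch of the pattern only; the real rpc_cmd in autotest_common.sh additionally keeps a persistent RPC connection):

    rpc_cmd() {
        xtrace_disable                   # the @563 records; internally runs "set +x" (@10)
        "$rootdir/scripts/rpc.py" "$@"   # talk to the SPDK application
        local rc=$?
        xtrace_restore
        [[ $rc == 0 ]]                   # the @591 "[[ 0 == 0 ]]" records
    }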
00:19:48.592 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: ]]
00:19:48.592 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==:
00:19:48.592 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:19:48.592 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:19:48.592 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:19:48.592 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:19:48.592 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:19:48.592 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:19:48.592 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:19:48.592 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:48.592 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:48.592 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:48.592 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:19:48.592 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:19:48.592 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:19:48.593 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:19:48.593 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:48.593 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:48.593 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:19:48.593 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:48.593 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:19:48.593 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:19:48.593 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:19:48.593 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:48.593 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:48.593 02:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:48.853 nvme0n1
00:19:48.853 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:48.853 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:19:48.853 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:48.853 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:19:48.853 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:48.853 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:48.853 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:48.853 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:48.853 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:48.853 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:48.853 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:48.853 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:19:48.853 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:19:48.853 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:19:48.853 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:19:48.853 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej:
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO:
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej:
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: ]]
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO:
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:48.854 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:49.113 nvme0n1
00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjExMzdiNTY2Nzc2MGMzZWZlNWI0YjIxY2MyY2M4ZWE0MzczNWM1YzA5OWY2ZmI4EtsZmg==:
key=DHHC-1:02:YjExMzdiNTY2Nzc2MGMzZWZlNWI0YjIxY2MyY2M4ZWE0MzczNWM1YzA5OWY2ZmI4EtsZmg==: 00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: 00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjExMzdiNTY2Nzc2MGMzZWZlNWI0YjIxY2MyY2M4ZWE0MzczNWM1YzA5OWY2ZmI4EtsZmg==: 00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: ]] 00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: 00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.155 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:49.156 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.156 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:49.156 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:49.156 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:49.156 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:49.156 02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.156 
02:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.421 nvme0n1 00:19:49.421 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.421 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.422 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.422 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.422 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.422 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.681 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.681 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.681 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.681 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.681 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.681 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.681 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:19:49.681 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.681 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:49.681 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:49.681 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:49.681 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2Y3NjdjYzZhYjA0OTViNTYzMDMzYjQxODgwOGM3OTA2NDY0OThmZDg2ZTU2N2FlNzcyNjY0YzYwYzdiYWI3Mjg1cH8=: 00:19:49.681 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:49.681 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:49.681 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:49.681 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2Y3NjdjYzZhYjA0OTViNTYzMDMzYjQxODgwOGM3OTA2NDY0OThmZDg2ZTU2N2FlNzcyNjY0YzYwYzdiYWI3Mjg1cH8=: 00:19:49.681 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:49.681 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:19:49.681 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.682 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:49.682 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:49.682 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:49.682 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.682 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:49.682 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.682 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.682 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.682 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.682 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:49.682 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:49.682 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:49.682 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.682 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.682 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:49.682 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.682 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:49.682 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:49.682 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:49.682 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:49.682 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.682 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.941 nvme0n1 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.941 02:01:00 
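[annotation] Each nvmet_auth_set_key call above (host/auth.sh@42-51) programs the kernel nvmet target side of the iteration: it echoes the HMAC name, the FFDHE group, and the DHHC-1 host secret, and writes the controller secret only when one exists (keyid 4 carries an empty ckey, hence the '[[ -z '' ]]' trace). A minimal standalone sketch of that helper; the configfs destinations are an assumption based on the Linux nvmet in-band-auth attributes, since the trace itself does not show the redirect targets:

HOSTNQN=nqn.2024-02.io.spdk:host0
HOST_CFS=/sys/kernel/config/nvmet/hosts/$HOSTNQN   # assumed nvmet configfs layout

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    echo "hmac(${digest})" > "$HOST_CFS/dhchap_hash"      # @48: e.g. 'hmac(sha256)'
    echo "$dhgroup"        > "$HOST_CFS/dhchap_dhgroup"   # @49: e.g. ffdhe8192
    echo "$key"            > "$HOST_CFS/dhchap_key"       # @50: host DHHC-1 secret
    # @51: bidirectional secret is written only when ckeys[keyid] is non-empty
    [[ -n $ckey ]] && echo "$ckey" > "$HOST_CFS/dhchap_ctrl_key"
}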
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGFmYmEzZjdjNTUwYmYzNTQ4MzRhMzU4MGI1ZDEzNDA+xYmS: 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGFmYmEzZjdjNTUwYmYzNTQ4MzRhMzU4MGI1ZDEzNDA+xYmS: 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: ]] 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.941 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.509 nvme0n1 00:19:50.509 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.509 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:50.509 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.509 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.509 02:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==: 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==: 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: ]] 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.509 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.077 nvme0n1 00:19:51.077 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.077 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.077 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.077 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.077 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.077 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.077 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.077 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.077 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:51.077 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.077 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.077 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.077 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:19:51.077 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.077 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:51.077 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:51.077 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:51.078 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej: 00:19:51.078 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: 00:19:51.078 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:51.078 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:51.078 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej: 00:19:51.078 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: ]] 00:19:51.078 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: 00:19:51.078 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:19:51.078 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.078 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:51.078 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:51.078 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:51.078 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.078 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:51.078 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.078 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.078 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.078 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:51.078 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:51.078 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:51.078 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:51.078 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.078 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.078 
02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:51.078 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:51.078 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:51.078 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:51.078 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:51.078 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.078 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.078 02:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.645 nvme0n1 00:19:51.645 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.645 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.645 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjExMzdiNTY2Nzc2MGMzZWZlNWI0YjIxY2MyY2M4ZWE0MzczNWM1YzA5OWY2ZmI4EtsZmg==: 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjExMzdiNTY2Nzc2MGMzZWZlNWI0YjIxY2MyY2M4ZWE0MzczNWM1YzA5OWY2ZmI4EtsZmg==: 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
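[annotation] On the SPDK host side, every connect_authenticate pass (host/auth.sh@55-65) reduces to the same four RPCs seen throughout this trace. A condensed sketch of one pass, assuming rpc_cmd is a thin wrapper over SPDK's scripts/rpc.py and that the key names key2/ckey2 were registered with the application earlier in the run (the registration step is outside this excerpt):

RPC="scripts/rpc.py"   # assumed stand-in for the harness's rpc_cmd

# @60: allow exactly one digest/dhgroup combination on the initiator
$RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

# @61: connect with this iteration's key pair; --dhchap-ctrlr-key is
# dropped when ckeys[keyid] is empty (see the keyid=4 passes)
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# @64: authentication succeeded only if the controller actually appeared
[[ "$($RPC bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]

# @65: tear down before the next digest/dhgroup/keyid combination
$RPC bdev_nvme_detach_controller nvme0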
DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: ]] 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:51.646 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:51.905 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:51.905 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.905 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.163 nvme0n1 00:19:52.163 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.163 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.163 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.163 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.163 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.163 02:01:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2Y3NjdjYzZhYjA0OTViNTYzMDMzYjQxODgwOGM3OTA2NDY0OThmZDg2ZTU2N2FlNzcyNjY0YzYwYzdiYWI3Mjg1cH8=: 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2Y3NjdjYzZhYjA0OTViNTYzMDMzYjQxODgwOGM3OTA2NDY0OThmZDg2ZTU2N2FlNzcyNjY0YzYwYzdiYWI3Mjg1cH8=: 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:52.422 02:01:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.422 02:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.987 nvme0n1 00:19:52.987 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGFmYmEzZjdjNTUwYmYzNTQ4MzRhMzU4MGI1ZDEzNDA+xYmS: 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGFmYmEzZjdjNTUwYmYzNTQ4MzRhMzU4MGI1ZDEzNDA+xYmS: 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: ]] 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:52.988 nvme0n1 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.988 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==: 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==: 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: ]] 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.246 nvme0n1 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.246 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:19:53.247 
02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej: 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej: 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: ]] 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.247 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.506 nvme0n1 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjExMzdiNTY2Nzc2MGMzZWZlNWI0YjIxY2MyY2M4ZWE0MzczNWM1YzA5OWY2ZmI4EtsZmg==: 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjExMzdiNTY2Nzc2MGMzZWZlNWI0YjIxY2MyY2M4ZWE0MzczNWM1YzA5OWY2ZmI4EtsZmg==: 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: ]] 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.506 
02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.506 02:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:53.506 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:53.506 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:53.506 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:53.506 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.506 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.506 nvme0n1 00:19:53.506 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.506 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.506 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.506 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.506 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.506 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.765 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.765 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2Y3NjdjYzZhYjA0OTViNTYzMDMzYjQxODgwOGM3OTA2NDY0OThmZDg2ZTU2N2FlNzcyNjY0YzYwYzdiYWI3Mjg1cH8=:
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2Y3NjdjYzZhYjA0OTViNTYzMDMzYjQxODgwOGM3OTA2NDY0OThmZDg2ZTU2N2FlNzcyNjY0YzYwYzdiYWI3Mjg1cH8=:
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:53.766 nvme0n1
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGFmYmEzZjdjNTUwYmYzNTQ4MzRhMzU4MGI1ZDEzNDA+xYmS:
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=:
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGFmYmEzZjdjNTUwYmYzNTQ4MzRhMzU4MGI1ZDEzNDA+xYmS:
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: ]]
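[editor's note] The nvmet_auth_set_key trace (the echo 'hmac(sha384)' / echo ffdhe3072 / echo DHHC-1:... triple) is the target-side half of each iteration. The destination paths are not visible in this excerpt; a plausible sketch, assuming the values land in the kernel nvmet configfs host entry:

    # sketch only: the configfs paths are an assumption, the trace shows just the echoes
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host_dir/dhchap_hash"     # digest
    echo ffdhe3072 > "$host_dir/dhchap_dhgroup"       # DH group
    echo "DHHC-1:00:NGFmYmEzZjdjNTUwYmYzNTQ4MzRhMzU4MGI1ZDEzNDA+xYmS:" \
        > "$host_dir/dhchap_key"                      # host key (key0 here)
    # when a controller key exists, it would be written to dhchap_ctrl_key as well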
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=:
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:53.766 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:54.026 nvme0n1
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==:
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==:
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==:
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: ]]
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==:
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:19:54.026 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
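[editor's note] The get_main_ns_ip block that repeats before every attach is transport-based address selection. A sketch reconstructed from the trace (variable names as shown there; the final indirection is inferred from the echoed result):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # pick the variable name matching the transport under test (tcp here),
        # then emit its value -- 10.0.0.1 throughout this run
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }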
00:19:54.027 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:19:54.027 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:19:54.027 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:54.027 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:54.027 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:19:54.027 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:54.027 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:19:54.027 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:19:54.027 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:19:54.027 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:54.027 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:54.027 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:54.286 nvme0n1
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej:
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO:
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej:
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: ]]
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO:
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:54.286 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:54.546 nvme0n1
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjExMzdiNTY2Nzc2MGMzZWZlNWI0YjIxY2MyY2M4ZWE0MzczNWM1YzA5OWY2ZmI4EtsZmg==:
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C:
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjExMzdiNTY2Nzc2MGMzZWZlNWI0YjIxY2MyY2M4ZWE0MzczNWM1YzA5OWY2ZmI4EtsZmg==:
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: ]]
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C:
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
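[editor's note] The escaped comparison that keeps recurring ([[ nvme0 == \n\v\m\e\0 ]]) is not corruption: bash xtrace backslash-quotes the right-hand side because an unquoted word in [[ ]] would otherwise be treated as a glob pattern. The underlying check is simply:

    # verify exactly one controller named nvme0 exists after the attach
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]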
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:54.546 02:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:54.546 nvme0n1
00:19:54.546 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:54.546 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:19:54.546 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:19:54.546 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:54.546 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:54.546 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:54.546 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:54.546 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:54.546 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:54.546 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:54.546 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:54.546 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:19:54.546 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:19:54.546 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:19:54.546 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:19:54.546 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:19:54.546 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:19:54.546 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2Y3NjdjYzZhYjA0OTViNTYzMDMzYjQxODgwOGM3OTA2NDY0OThmZDg2ZTU2N2FlNzcyNjY0YzYwYzdiYWI3Mjg1cH8=:
00:19:54.546 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:19:54.546 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:19:54.546 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:19:54.546 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2Y3NjdjYzZhYjA0OTViNTYzMDMzYjQxODgwOGM3OTA2NDY0OThmZDg2ZTU2N2FlNzcyNjY0YzYwYzdiYWI3Mjg1cH8=:
00:19:54.546 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:19:54.546 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:19:54.546 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:19:54.546 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:19:54.546 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:19:54.546 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:19:54.546 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:19:54.546 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
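[editor's note] keyid 4 is the unidirectional case: its ckey is empty, so the [[ -z '' ]] branch skips the controller key and the expansion at @58 yields no --dhchap-ctrlr-key argument at all, as the attach above shows. The idiom, sketched with placeholder variables standing in for the literals seen in the trace:

    # expands to nothing when ckeys[keyid] is empty or unset, so the attach
    # runs with host authentication only
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "key${keyid}" "${ckey[@]}"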
00:19:54.806 nvme0n1
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGFmYmEzZjdjNTUwYmYzNTQ4MzRhMzU4MGI1ZDEzNDA+xYmS:
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=:
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGFmYmEzZjdjNTUwYmYzNTQ4MzRhMzU4MGI1ZDEzNDA+xYmS:
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: ]]
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=:
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:19:54.806 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:19:54.807 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:19:54.807 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:54.807 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:54.807 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:54.807 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:54.807 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:19:54.807 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:19:54.807 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:19:54.807 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:19:54.807 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:54.807 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:54.807 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:19:54.807 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:54.807 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:19:54.807 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:19:54.807 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:19:54.807 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:54.807 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:54.807 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:55.066 nvme0n1
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
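[editor's note] For orientation, the @101/@102/@103 markers show the sweep driving this whole stretch of the log: an outer loop over DH groups and an inner loop over the five key IDs, with keyid 4 carrying no controller key. Schematically (group list inferred; only ffdhe2048 through ffdhe6144 appear in this excerpt):

    for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...
        for keyid in "${!keys[@]}"; do         # 0..4
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"    # target side
            connect_authenticate sha384 "$dhgroup" "$keyid"  # host side: attach, verify, detach
        done
    done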
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==:
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==:
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==:
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: ]]
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==:
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:55.066 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:55.325 nvme0n1
00:19:55.325 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:19:55.325 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:55.325 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:19:55.325 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:55.325 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej:
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO:
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej:
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: ]]
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO:
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:55.326 02:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:55.585 nvme0n1
00:19:55.585 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:55.585 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:19:55.585 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:19:55.585 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:55.585 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:55.585 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:55.585 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:55.585 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:55.585 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:55.585 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:55.585 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:55.585 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:19:55.585 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:19:55.585 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:19:55.585 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:19:55.585 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:19:55.585 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:19:55.585 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjExMzdiNTY2Nzc2MGMzZWZlNWI0YjIxY2MyY2M4ZWE0MzczNWM1YzA5OWY2ZmI4EtsZmg==:
00:19:55.585 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C:
00:19:55.586 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:19:55.586 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:19:55.586 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjExMzdiNTY2Nzc2MGMzZWZlNWI0YjIxY2MyY2M4ZWE0MzczNWM1YzA5OWY2ZmI4EtsZmg==:
00:19:55.586 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: ]]
00:19:55.586 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C:
00:19:55.586 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3
00:19:55.586 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:19:55.586 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:19:55.586 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:19:55.586 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:19:55.586 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:19:55.586 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:55.586 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:55.586 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:55.586 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:55.586 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:19:55.586 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:19:55.586 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:19:55.586 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.586 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.586 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:55.586 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.586 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:55.586 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:55.586 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:55.586 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:55.586 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.586 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.845 nvme0n1 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2Y3NjdjYzZhYjA0OTViNTYzMDMzYjQxODgwOGM3OTA2NDY0OThmZDg2ZTU2N2FlNzcyNjY0YzYwYzdiYWI3Mjg1cH8=: 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2Y3NjdjYzZhYjA0OTViNTYzMDMzYjQxODgwOGM3OTA2NDY0OThmZDg2ZTU2N2FlNzcyNjY0YzYwYzdiYWI3Mjg1cH8=: 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:55.845 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:55.846 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:55.846 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.846 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.105 nvme0n1 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGFmYmEzZjdjNTUwYmYzNTQ4MzRhMzU4MGI1ZDEzNDA+xYmS: 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGFmYmEzZjdjNTUwYmYzNTQ4MzRhMzU4MGI1ZDEzNDA+xYmS: 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: ]] 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.105 02:01:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.105 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.674 nvme0n1 00:19:56.674 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.674 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.674 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.674 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.674 02:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==: 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==: 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: ]] 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.674 02:01:07 
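Each iteration traced above follows the same RPC pattern: pick the digest/dhgroup via bdev_nvme_set_options, resolve the initiator IP, attach the controller with the DH-HMAC-CHAP key(s), confirm it via bdev_nvme_get_controllers, then detach. A condensed sketch of the keyid=1 iteration just logged, using the same calls recorded in the trace (rpc_cmd in the log is the autotest wrapper around scripts/rpc.py; the sketch assumes key1/ckey1 were registered earlier in the test, which this excerpt does not show):

    # One connect_authenticate pass: sha384 digest, ffdhe6144 DH group, keyid 1.
    rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
    rpc.py bdev_nvme_detach_controller nvme0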
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.674 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.933 nvme0n1 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej: 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej: 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: ]] 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:56.933 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.934 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.934 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.934 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.934 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:56.934 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:56.934 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:56.934 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.934 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.934 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:56.934 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.934 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:56.934 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:56.934 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:56.934 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.934 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.934 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.191 nvme0n1 00:19:57.191 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.191 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.191 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.191 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.191 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.191 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.449 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.449 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.449 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.449 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.449 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.449 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.449 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:19:57.449 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjExMzdiNTY2Nzc2MGMzZWZlNWI0YjIxY2MyY2M4ZWE0MzczNWM1YzA5OWY2ZmI4EtsZmg==: 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjExMzdiNTY2Nzc2MGMzZWZlNWI0YjIxY2MyY2M4ZWE0MzczNWM1YzA5OWY2ZmI4EtsZmg==: 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: ]] 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.450 02:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.708 nvme0n1 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2Y3NjdjYzZhYjA0OTViNTYzMDMzYjQxODgwOGM3OTA2NDY0OThmZDg2ZTU2N2FlNzcyNjY0YzYwYzdiYWI3Mjg1cH8=: 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2Y3NjdjYzZhYjA0OTViNTYzMDMzYjQxODgwOGM3OTA2NDY0OThmZDg2ZTU2N2FlNzcyNjY0YzYwYzdiYWI3Mjg1cH8=: 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:57.708 02:01:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.708 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.966 nvme0n1 00:19:57.966 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.966 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.966 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.966 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.966 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.966 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.966 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.966 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.966 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.966 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
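Note that the keyid=4 attach above passes only --dhchap-key key4: host/auth.sh@58 builds the controller-key flag with bash's alternate-value expansion inside an array assignment, so when ckeys[keyid] is unset or empty the array expands to zero arguments and the flag disappears entirely (hence the [[ -z '' ]] checks in the trace). A minimal demonstration of the idiom, with illustrative values not taken from the log:

    # ${var:+word} yields "word" only when var is set and non-empty.
    declare -a ckey ckeys
    ckeys=(secret0 secret1 '')         # keyid 2 has no controller key
    keyid=2
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]} extra args"      # prints "0 extra args": flag omitted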
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGFmYmEzZjdjNTUwYmYzNTQ4MzRhMzU4MGI1ZDEzNDA+xYmS: 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGFmYmEzZjdjNTUwYmYzNTQ4MzRhMzU4MGI1ZDEzNDA+xYmS: 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: ]] 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.225 02:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.797 nvme0n1 00:19:58.797 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.797 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.797 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.797 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.797 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.797 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.797 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.797 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.797 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.797 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.797 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.797 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.798 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:19:58.798 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.798 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:58.798 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:58.798 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:58.798 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==: 00:19:58.798 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: 00:19:58.798 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:58.798 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:58.798 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==: 00:19:58.798 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: ]] 00:19:58.798 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: 00:19:58.798 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:19:58.798 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.798 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:58.798 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:58.798 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:58.798 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.798 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:58.798 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.798 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.798 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.799 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.799 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:58.799 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:58.799 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:58.799 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.799 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.799 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:58.799 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.799 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:58.799 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:58.799 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:58.799 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.799 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.799 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.370 nvme0n1 00:19:59.370 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.370 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.370 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.370 02:01:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.370 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.370 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.370 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.370 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.370 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.370 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.370 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.370 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.370 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:19:59.370 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.370 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:59.370 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:59.370 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:59.370 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej: 00:19:59.370 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: 00:19:59.370 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:59.370 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:59.370 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej: 00:19:59.370 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: ]] 00:19:59.370 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: 00:19:59.370 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:19:59.370 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.370 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:59.370 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:59.371 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:59.371 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.371 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:59.371 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.371 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.371 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.371 02:01:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.371 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:59.371 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:59.371 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:59.371 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.371 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.371 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:59.371 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.371 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:59.371 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:59.371 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:59.371 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.371 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.371 02:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.938 nvme0n1 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YjExMzdiNTY2Nzc2MGMzZWZlNWI0YjIxY2MyY2M4ZWE0MzczNWM1YzA5OWY2ZmI4EtsZmg==: 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjExMzdiNTY2Nzc2MGMzZWZlNWI0YjIxY2MyY2M4ZWE0MzczNWM1YzA5OWY2ZmI4EtsZmg==: 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: ]] 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:59.938 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.938 
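The secrets echoed throughout this test use the NVMe in-band authentication secret representation, DHHC-1:<t>:<base64>:, where <t> indicates the transformation applied to the secret (00 cleartext; 01/02/03 hashed with SHA-256/384/512 per the NVMe Base Specification) and the base64 payload carries the secret followed by a CRC-32 check value. Keys in this form can be generated with nvme-cli; something like the following should produce one, though flag spellings vary across nvme-cli versions (see nvme gen-dhchap-key --help):

    # 48-byte secret with a SHA-384 transform -> a DHHC-1:02:... string
    nvme gen-dhchap-key --key-length=48 --hmac=2 --nqn=nqn.2024-02.io.spdk:cnode0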
02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.506 nvme0n1 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2Y3NjdjYzZhYjA0OTViNTYzMDMzYjQxODgwOGM3OTA2NDY0OThmZDg2ZTU2N2FlNzcyNjY0YzYwYzdiYWI3Mjg1cH8=: 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2Y3NjdjYzZhYjA0OTViNTYzMDMzYjQxODgwOGM3OTA2NDY0OThmZDg2ZTU2N2FlNzcyNjY0YzYwYzdiYWI3Mjg1cH8=: 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.506 02:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.075 nvme0n1 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:20:01.075 02:01:11 
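The @100/@101/@102 markers show the driver loop advancing here from sha384 to sha512 and restarting the dhgroup sweep at ffdhe2048. A reconstruction of that loop nest's shape from the trace (array contents are inferred from the iterations visible in this excerpt, so treat them as illustrative):

    for digest in "${digests[@]}"; do          # sha384, sha512, ... in this excerpt
        for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048 .. ffdhe8192
            for keyid in "${!keys[@]}"; do     # 0 1 2 3 4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"      # host/auth.sh@103
                connect_authenticate "$digest" "$dhgroup" "$keyid"    # host/auth.sh@104
            done
        done
    done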
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGFmYmEzZjdjNTUwYmYzNTQ4MzRhMzU4MGI1ZDEzNDA+xYmS: 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGFmYmEzZjdjNTUwYmYzNTQ4MzRhMzU4MGI1ZDEzNDA+xYmS: 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: ]] 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:01.075 02:01:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.075 nvme0n1 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.075 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==: 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==: 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: ]] 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: 00:20:01.335 02:01:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.335 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.336 nvme0n1 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej: 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej: 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: ]] 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.336 02:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.596 nvme0n1 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjExMzdiNTY2Nzc2MGMzZWZlNWI0YjIxY2MyY2M4ZWE0MzczNWM1YzA5OWY2ZmI4EtsZmg==: 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:YjExMzdiNTY2Nzc2MGMzZWZlNWI0YjIxY2MyY2M4ZWE0MzczNWM1YzA5OWY2ZmI4EtsZmg==: 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: ]] 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.596 nvme0n1 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.596 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.855 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.855 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.855 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.855 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.855 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.855 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.855 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.855 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:20:01.855 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.855 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:01.855 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:01.855 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:01.855 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2Y3NjdjYzZhYjA0OTViNTYzMDMzYjQxODgwOGM3OTA2NDY0OThmZDg2ZTU2N2FlNzcyNjY0YzYwYzdiYWI3Mjg1cH8=: 00:20:01.855 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:01.855 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:01.855 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:01.855 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2Y3NjdjYzZhYjA0OTViNTYzMDMzYjQxODgwOGM3OTA2NDY0OThmZDg2ZTU2N2FlNzcyNjY0YzYwYzdiYWI3Mjg1cH8=: 00:20:01.855 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:01.855 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:20:01.855 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.855 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:01.855 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:01.855 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:01.855 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.855 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.856 nvme0n1 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGFmYmEzZjdjNTUwYmYzNTQ4MzRhMzU4MGI1ZDEzNDA+xYmS: 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGFmYmEzZjdjNTUwYmYzNTQ4MzRhMzU4MGI1ZDEzNDA+xYmS: 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: ]] 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.856 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:02.116 nvme0n1 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==: 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==: 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: ]] 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.116 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.375 nvme0n1 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:20:02.375 
02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej: 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej: 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: ]] 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
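Each attach is immediately verified and torn down before the next key id; that is the bdev_nvme_get_controllers / jq / bdev_nvme_detach_controller triple repeating throughout this section (host/auth.sh@64-65). The stray nvme0n1 tokens in the log are most likely the namespace device surfacing once the controller connects. A sketch of the check, using only the commands in the trace:

    # Confirm DH-HMAC-CHAP actually let the controller in, then detach it
    # so the next digest/dhgroup/keyid combination starts clean.
    ctrlr=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $ctrlr == "nvme0" ]]               # trace: [[ nvme0 == \n\v\m\e\0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0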
nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:02.375 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.376 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.376 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.635 nvme0n1 00:20:02.635 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.635 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.635 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.635 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.635 02:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjExMzdiNTY2Nzc2MGMzZWZlNWI0YjIxY2MyY2M4ZWE0MzczNWM1YzA5OWY2ZmI4EtsZmg==: 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjExMzdiNTY2Nzc2MGMzZWZlNWI0YjIxY2MyY2M4ZWE0MzczNWM1YzA5OWY2ZmI4EtsZmg==: 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: ]] 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.635 
02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.635 nvme0n1 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.635 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2Y3NjdjYzZhYjA0OTViNTYzMDMzYjQxODgwOGM3OTA2NDY0OThmZDg2ZTU2N2FlNzcyNjY0YzYwYzdiYWI3Mjg1cH8=: 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2Y3NjdjYzZhYjA0OTViNTYzMDMzYjQxODgwOGM3OTA2NDY0OThmZDg2ZTU2N2FlNzcyNjY0YzYwYzdiYWI3Mjg1cH8=: 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
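Key id 4 just above is the unidirectional case: its controller key is empty (ckey= in the trace), so the [[ -z '' ]] branch at host/auth.sh@51 skips programming one, and the attach carries only --dhchap-key key4. The mechanism is the ${ckeys[keyid]:+...} expansion at host/auth.sh@58, which drops the whole flag pair when the array entry is empty. A tiny illustration, with array contents shortened:

    declare -a ckeys=([0]="DHHC-1:03:..." [4]="")   # key id 4: no controller key
    keyid=4
    # ':+' expands its text only when ckeys[keyid] is set AND non-empty, so for
    # key id 4 the array stays empty and no --dhchap-ctrlr-key flag is emitted.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"   # 0 here; 2 for key ids that do have a controller key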
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.895 nvme0n1 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.895 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGFmYmEzZjdjNTUwYmYzNTQ4MzRhMzU4MGI1ZDEzNDA+xYmS: 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGFmYmEzZjdjNTUwYmYzNTQ4MzRhMzU4MGI1ZDEzNDA+xYmS: 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: ]] 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.896 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.155 nvme0n1 00:20:03.155 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.155 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.155 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.155 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.155 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.155 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.155 
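By this point the trace has advanced from ffdhe3072 to ffdhe4096, still under sha512, which exposes the driver of the whole section: the nested loop at host/auth.sh@101-104 sweeps every DH group and, inside it, every key id, first programming the target (nvmet_auth_set_key) and then proving the host can authenticate (connect_authenticate). Reconstructed shape of that sweep; only the three groups seen so far are listed, and the keys/ckeys arrays are defined earlier in the script:

    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)   # groups observed in this section
    for dhgroup in "${dhgroups[@]}"; do         # host/auth.sh@101
        for keyid in "${!keys[@]}"; do          # host/auth.sh@102, key ids 0..4
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"     # target side
            connect_authenticate sha512 "$dhgroup" "$keyid"   # host side
        done
    done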
02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.155 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.155 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.155 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.155 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.155 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.155 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:20:03.155 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.155 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:03.155 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:03.155 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:03.155 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==: 00:20:03.155 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: 00:20:03.155 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:03.155 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:03.155 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==: 00:20:03.155 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: ]] 00:20:03.155 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: 00:20:03.155 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:20:03.155 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.155 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:03.155 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:03.155 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:03.155 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.156 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:03.156 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.156 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.156 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.156 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.156 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:03.156 02:01:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:03.156 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:03.156 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.156 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.156 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:03.156 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.156 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:03.156 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:03.156 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:03.156 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.156 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.156 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.415 nvme0n1 00:20:03.415 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.415 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.415 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.415 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.415 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.415 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.415 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.415 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.415 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.415 02:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej: 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: 00:20:03.415 02:01:14 
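The nvmf/common.sh@769-783 run that precedes every attach is get_main_ns_ip: it maps the transport to the name of an environment variable (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and dereferences it, which is why each pass ends with echo 10.0.0.1. A sketch of that resolution; the indirect ${!ip} expansion and the TEST_TRANSPORT variable are assumptions about the helper's internals, only the mapping and the printed address appear in the trace:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1               # 'tcp' in this run
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}               # -> NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                        # indirect deref
        echo "${!ip}"                                      # 10.0.0.1 here
    }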
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej: 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: ]] 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.415 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.674 nvme0n1 00:20:03.674 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.674 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.674 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.674 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.674 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.674 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.674 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.674 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.674 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.674 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.674 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.674 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.674 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:20:03.674 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.674 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:03.674 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:03.674 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:03.674 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjExMzdiNTY2Nzc2MGMzZWZlNWI0YjIxY2MyY2M4ZWE0MzczNWM1YzA5OWY2ZmI4EtsZmg==: 00:20:03.674 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: 00:20:03.674 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:03.674 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:03.675 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjExMzdiNTY2Nzc2MGMzZWZlNWI0YjIxY2MyY2M4ZWE0MzczNWM1YzA5OWY2ZmI4EtsZmg==: 00:20:03.675 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: ]] 00:20:03.675 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: 00:20:03.675 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:20:03.675 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.675 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:03.675 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:03.675 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:03.675 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.675 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:03.675 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.675 02:01:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.675 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.675 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.675 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:03.675 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:03.675 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:03.675 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.675 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.675 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:03.675 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.675 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:03.675 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:03.675 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:03.675 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:03.675 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.934 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.934 nvme0n1 00:20:03.934 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.934 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.934 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.934 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.934 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.934 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.934 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.934 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.935 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.935 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.935 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.935 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.935 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:20:03.935 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.935 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:03.935 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:03.935 
02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:03.935 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2Y3NjdjYzZhYjA0OTViNTYzMDMzYjQxODgwOGM3OTA2NDY0OThmZDg2ZTU2N2FlNzcyNjY0YzYwYzdiYWI3Mjg1cH8=: 00:20:03.935 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:03.935 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:03.935 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:03.935 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2Y3NjdjYzZhYjA0OTViNTYzMDMzYjQxODgwOGM3OTA2NDY0OThmZDg2ZTU2N2FlNzcyNjY0YzYwYzdiYWI3Mjg1cH8=: 00:20:03.935 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:03.935 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:20:03.935 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.935 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:03.935 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:03.935 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:03.935 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.935 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:03.935 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.935 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
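The records above are one rung of the sha512 ladder: for each keyid, host/auth.sh@103 (nvmet_auth_set_key) programs the target side (the @48-@51 echos push the 'hmac(sha512)' digest, the ffdhe4096 DH group, and the DHHC-1 key/ckey pair into the kernel nvmet auth configuration, presumably via configfs), and host/auth.sh@104 (connect_authenticate) then drives the SPDK initiator through an authenticated attach and detach against 10.0.0.1, the address resolved by get_main_ns_ip at nvmf/common.sh@769-783. A minimal sketch of one initiator-side iteration, assuming rpc_cmd wraps scripts/rpc.py against the running SPDK target and that the DHHC-1 secrets were registered earlier in the run under the names key1/ckey1:

    # Restrict the initiator to the digest/DH group under test (host/auth.sh@60).
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    # Attach over TCP with bidirectional DH-HMAC-CHAP: host key plus controller
    # (ctrlr) key, as at host/auth.sh@61.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Authentication succeeded iff the controller shows up (host/auth.sh@64);
    # then tear it down so the next keyid starts clean (host/auth.sh@65).
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

The interleaved nvme0n1 lines are the attach call reporting the namespace bdev it created, i.e. the visible sign that each handshake completed.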
00:20:04.193 nvme0n1 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGFmYmEzZjdjNTUwYmYzNTQ4MzRhMzU4MGI1ZDEzNDA+xYmS: 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGFmYmEzZjdjNTUwYmYzNTQ4MzRhMzU4MGI1ZDEzNDA+xYmS: 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: ]] 00:20:04.193 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: 00:20:04.452 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:20:04.452 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.452 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:04.452 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:04.452 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:04.452 02:01:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.452 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:04.452 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.452 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.452 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.452 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.452 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:04.452 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:04.452 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:04.452 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.452 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.452 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:04.452 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.452 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:04.452 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:04.452 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:04.452 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.452 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.452 02:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.711 nvme0n1 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.711 02:01:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==: 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==: 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: ]] 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.711 02:01:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.711 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.970 nvme0n1 00:20:04.970 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.970 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.970 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.970 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.970 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.970 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.970 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.971 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.971 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.971 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.971 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.971 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.971 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:20:04.971 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.971 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:04.971 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:04.971 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:04.971 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej: 00:20:04.971 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: 00:20:04.971 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:04.971 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:04.971 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej: 00:20:04.971 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: ]] 00:20:04.971 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: 00:20:04.971 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:20:04.971 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.971 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:04.971 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:04.971 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:04.971 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.971 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:04.971 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.971 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.230 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.230 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.230 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:05.230 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:05.230 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:05.230 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.230 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.230 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:05.230 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.230 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:05.230 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:05.230 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:05.230 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.230 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.230 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.489 nvme0n1 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjExMzdiNTY2Nzc2MGMzZWZlNWI0YjIxY2MyY2M4ZWE0MzczNWM1YzA5OWY2ZmI4EtsZmg==: 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjExMzdiNTY2Nzc2MGMzZWZlNWI0YjIxY2MyY2M4ZWE0MzczNWM1YzA5OWY2ZmI4EtsZmg==: 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: ]] 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.489 02:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.749 nvme0n1 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2Y3NjdjYzZhYjA0OTViNTYzMDMzYjQxODgwOGM3OTA2NDY0OThmZDg2ZTU2N2FlNzcyNjY0YzYwYzdiYWI3Mjg1cH8=: 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2Y3NjdjYzZhYjA0OTViNTYzMDMzYjQxODgwOGM3OTA2NDY0OThmZDg2ZTU2N2FlNzcyNjY0YzYwYzdiYWI3Mjg1cH8=: 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.749 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.007 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:06.007 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.008 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:06.008 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:06.008 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:06.008 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:06.008 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.008 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.267 nvme0n1 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGFmYmEzZjdjNTUwYmYzNTQ4MzRhMzU4MGI1ZDEzNDA+xYmS: 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGFmYmEzZjdjNTUwYmYzNTQ4MzRhMzU4MGI1ZDEzNDA+xYmS: 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: ]] 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:M2ZmZjc2YWVjZWI0NjI5MTMxZjM4ZDM1YmE1NTk0NmNjMjE4MjBiOWQyZjMyOWEwZjc5NGMzMjUzZGFmYjZhOdq8iEI=: 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.267 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.268 02:01:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.268 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:06.268 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:06.268 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:06.268 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.268 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.268 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:06.268 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.268 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:06.268 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:06.268 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:06.268 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.268 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.268 02:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.835 nvme0n1 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==: 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==: 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: ]] 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.835 02:01:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.835 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.403 nvme0n1 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej: 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej: 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: ]] 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.403 02:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.972 nvme0n1 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjExMzdiNTY2Nzc2MGMzZWZlNWI0YjIxY2MyY2M4ZWE0MzczNWM1YzA5OWY2ZmI4EtsZmg==: 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjExMzdiNTY2Nzc2MGMzZWZlNWI0YjIxY2MyY2M4ZWE0MzczNWM1YzA5OWY2ZmI4EtsZmg==: 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: ]] 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mzc2N2I3NzE2NjUwYjg5ZTFlZTg2ZWFjZTlkMzFlYzLaqI9C: 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.972 02:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.539 nvme0n1 00:20:08.539 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.539 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2Y3NjdjYzZhYjA0OTViNTYzMDMzYjQxODgwOGM3OTA2NDY0OThmZDg2ZTU2N2FlNzcyNjY0YzYwYzdiYWI3Mjg1cH8=: 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2Y3NjdjYzZhYjA0OTViNTYzMDMzYjQxODgwOGM3OTA2NDY0OThmZDg2ZTU2N2FlNzcyNjY0YzYwYzdiYWI3Mjg1cH8=: 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:08.540 02:01:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.540 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.106 nvme0n1 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
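Keyid 4 above runs with an empty ckey, so authentication is one-way and the attach carries no --dhchap-ctrlr-key. That is the work of the `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` expansion at host/auth.sh@58: `${var:+word}` expands to nothing when var is unset or empty, so the array is either empty or holds the complete option pair. The idiom in isolation (values illustrative):

    # Sparse indexed array: keyid 2 has a controller key, keyid 4 does not.
    ckeys=([2]="DHHC-1:01:somekey:" [4]="")
    for keyid in 2 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-<no controller key option>}"
    done

Passing "${ckey[@]}" to rpc_cmd then contributes either zero or two arguments, which is why the keyid-4 attach above ends at --dhchap-key key4.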
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==: 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==: 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: ]] 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.106 request: 00:20:09.106 { 00:20:09.106 "name": "nvme0", 00:20:09.106 "trtype": "tcp", 00:20:09.106 "traddr": "10.0.0.1", 00:20:09.106 "adrfam": "ipv4", 00:20:09.106 "trsvcid": "4420", 00:20:09.106 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:09.106 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:09.106 "prchk_reftag": false, 00:20:09.106 "prchk_guard": false, 00:20:09.106 "hdgst": false, 00:20:09.106 "ddgst": false, 00:20:09.106 "allow_unrecognized_csi": false, 00:20:09.106 "method": "bdev_nvme_attach_controller", 00:20:09.106 "req_id": 1 00:20:09.106 } 00:20:09.106 Got JSON-RPC error response 00:20:09.106 response: 00:20:09.106 { 00:20:09.106 "code": -5, 00:20:09.106 "message": "Input/output error" 00:20:09.106 } 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:09.106 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:09.107 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.107 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.107 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:20:09.107 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
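The request/response dump above is the expected-failure path: with no --dhchap-key supplied at all, the target rejects the connection and the RPC surfaces JSON-RPC error -5 (Input/output error). The NOT wrapper turns that failure into a pass. Reduced to its core (the real helper in autotest_common.sh also validates that its first argument is executable and tracks the exit status in es):

    # NOT: succeed only when the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1    # unexpected success
        fi
        return 0        # failed as expected
    }

    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0

host/auth.sh@114 then checks `jq length` against bdev_nvme_get_controllers to confirm the failed attach left no stale controller behind.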
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.365 request: 00:20:09.365 { 00:20:09.365 "name": "nvme0", 00:20:09.365 "trtype": "tcp", 00:20:09.365 "traddr": "10.0.0.1", 00:20:09.365 "adrfam": "ipv4", 00:20:09.365 "trsvcid": "4420", 00:20:09.365 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:09.365 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:09.365 "prchk_reftag": false, 00:20:09.365 "prchk_guard": false, 00:20:09.365 "hdgst": false, 00:20:09.365 "ddgst": false, 00:20:09.365 "dhchap_key": "key2", 00:20:09.365 "allow_unrecognized_csi": false, 00:20:09.365 "method": "bdev_nvme_attach_controller", 00:20:09.365 "req_id": 1 00:20:09.365 } 00:20:09.365 Got JSON-RPC error response 00:20:09.365 response: 00:20:09.365 { 00:20:09.365 "code": -5, 00:20:09.365 "message": "Input/output error" 00:20:09.365 } 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:09.365 02:01:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:09.365 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:09.366 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:09.366 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:09.366 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:09.366 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:09.366 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:09.366 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:09.366 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.366 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.366 request: 00:20:09.366 { 00:20:09.366 "name": "nvme0", 00:20:09.366 "trtype": "tcp", 00:20:09.366 "traddr": "10.0.0.1", 00:20:09.366 "adrfam": "ipv4", 00:20:09.366 "trsvcid": "4420", 
00:20:09.366 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:09.366 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:09.366 "prchk_reftag": false, 00:20:09.366 "prchk_guard": false, 00:20:09.366 "hdgst": false, 00:20:09.366 "ddgst": false, 00:20:09.366 "dhchap_key": "key1", 00:20:09.366 "dhchap_ctrlr_key": "ckey2", 00:20:09.366 "allow_unrecognized_csi": false, 00:20:09.366 "method": "bdev_nvme_attach_controller", 00:20:09.366 "req_id": 1 00:20:09.366 } 00:20:09.366 Got JSON-RPC error response 00:20:09.366 response: 00:20:09.366 { 00:20:09.366 "code": -5, 00:20:09.366 "message": "Input/output error" 00:20:09.366 } 00:20:09.366 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:09.366 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:09.366 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:09.366 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:09.366 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:09.366 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:20:09.366 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:09.366 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:09.366 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:09.366 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.366 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.366 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:09.366 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.366 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:09.366 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:09.366 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:09.366 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:09.366 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.366 02:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.624 nvme0n1 00:20:09.624 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.624 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:09.624 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.624 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:09.624 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:09.624 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:09.624 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej: 00:20:09.624 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: 00:20:09.624 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:09.624 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:09.624 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej: 00:20:09.624 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: ]] 00:20:09.624 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: 00:20:09.624 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.624 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.624 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.624 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.624 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.624 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.624 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.624 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:20:09.624 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.624 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.624 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:09.624 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:09.625 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:09.625 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:09.625 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:09.625 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:09.625 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:09.625 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:09.625 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.625 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.625 request: 00:20:09.625 { 00:20:09.625 "name": "nvme0", 00:20:09.625 "dhchap_key": "key1", 00:20:09.625 "dhchap_ctrlr_key": "ckey2", 00:20:09.625 "method": "bdev_nvme_set_keys", 00:20:09.625 "req_id": 1 00:20:09.625 } 00:20:09.625 Got JSON-RPC error response 00:20:09.625 response: 00:20:09.625 
{ 00:20:09.625 "code": -13, 00:20:09.625 "message": "Permission denied" 00:20:09.625 } 00:20:09.625 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:09.625 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:09.625 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:09.625 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:09.625 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:09.625 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.625 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:09.625 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.625 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.625 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.625 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:20:09.625 02:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==: 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1MDAzZmNmNDk1NjRkMmIzZmQ4ZTg2ZWIzODAwODI3M2IxZWQyMmFkOGJjZDY4y2wPdQ==: 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: ]] 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Zjg5NWYwYWUxNTBkODRhNDhjZjQyMDg0MWQ2ZTIyMjQzMjQ2MjA2YmQ4ZGQ0ZjE5K1dwDA==: 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host 
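Unlike the bad-connect cases, a rejected bdev_nvme_set_keys returns -13 (Permission denied): the controller exists, but the target refuses the new credentials. The `jq length` / `sleep 1s` sequence above is a poll: with the 1-second loss/reconnect timers set earlier, the initiator abandons the de-authenticated controller shortly after the failed re-key, and the test waits for the controller count to reach zero before moving on. The pattern, roughly:

    # Wait for the initiator to give up on the de-authenticated controller.
    while (( $(scripts/rpc.py bdev_nvme_get_controllers | jq length) != 0 )); do
        sleep 1
    done

The trace then repeats the experiment in the opposite direction: re-attach with key1/ckey1, move the target to keyid 2, and confirm that set_keys with key2 but the stale ckey1 is likewise denied.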
-- host/auth.sh@142 -- # get_main_ns_ip 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.001 nvme0n1 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej: 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjE4ZmRiMDNlNjg4MWQzOGE3NmVjNzRjYjM0NDIwMjKSb5ej: 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: ]] 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTkyMzQ5OWY4YmQ5NzNkMWU3MjQ5N2FmZDcwNzA0YThnLmsO: 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.001 request: 00:20:11.001 { 00:20:11.001 "name": "nvme0", 00:20:11.001 "dhchap_key": "key2", 00:20:11.001 "dhchap_ctrlr_key": "ckey1", 00:20:11.001 "method": "bdev_nvme_set_keys", 00:20:11.001 "req_id": 1 00:20:11.001 } 00:20:11.001 Got JSON-RPC error response 00:20:11.001 response: 00:20:11.001 { 00:20:11.001 "code": -13, 00:20:11.001 "message": "Permission denied" 00:20:11.001 } 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:20:11.001 02:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:20:11.937 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.937 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:11.937 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.937 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.937 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.937 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:20:11.937 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:20:11.937 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:20:11.937 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:20:11.937 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:20:11.937 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:20:11.937 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:11.937 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:20:11.937 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:11.937 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:11.937 rmmod nvme_tcp 00:20:11.937 rmmod nvme_fabrics 00:20:11.937 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:11.937 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:20:11.937 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:20:11.937 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 92899 ']' 00:20:11.937 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 92899 00:20:11.937 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 92899 ']' 00:20:11.937 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 92899 00:20:11.937 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:20:11.937 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:11.937 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92899 00:20:12.196 killing process with pid 92899 00:20:12.196 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:12.196 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:12.196 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92899' 00:20:12.196 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 92899 00:20:12.196 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 92899 00:20:12.196 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:12.196 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:12.196 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:12.196 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:20:12.196 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:20:12.196 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:12.196 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:20:12.196 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:12.196 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:12.196 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:12.196 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:12.196 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:12.196 02:01:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:12.196 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:12.196 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:12.196 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:12.196 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:12.196 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:12.455 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:12.455 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:12.455 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:12.455 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:12.455 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:12.455 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.455 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:12.455 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.455 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:20:12.455 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:12.455 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:12.455 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:20:12.455 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:20:12.455 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:20:12.455 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:12.455 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:12.455 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:12.455 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:12.455 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:20:12.455 02:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:20:12.455 02:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:13.457 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:13.457 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
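Teardown runs in strict reverse order of setup: unload nvme-tcp/nvme-fabrics on the initiator, kill the nvmf target process, dismantle the veth/bridge topology and the nvmf_tgt_ns_spdk namespace, then unwind the kernel target's configfs tree before removing the nvmet modules, since configfs refuses to rmdir an entry that is still linked. The configfs portion (clean_kernel_target above) condenses to the following sketch; paths follow the trace, and directing the bare `echo 0` at the namespace's enable attribute is an assumption:

    S=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    echo 0 > "$S/namespaces/1/enable"   # disable the namespace first (assumed target of the echo 0)
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0  # unlink from the port
    rmdir "$S/namespaces/1"
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "$S"                          # subsystem goes last
    modprobe -r nvmet_tcp nvmet        # finally drop the modules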
00:20:13.457 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:13.457 02:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.XHT /tmp/spdk.key-null.L21 /tmp/spdk.key-sha256.l5d /tmp/spdk.key-sha384.Pmi /tmp/spdk.key-sha512.Gs0 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:20:13.457 02:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:13.740 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:13.741 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:13.741 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:13.741 00:20:13.741 real 0m34.979s 00:20:13.741 user 0m32.507s 00:20:13.741 sys 0m3.769s 00:20:13.741 02:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:13.741 02:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.741 ************************************ 00:20:13.741 END TEST nvmf_auth_host 00:20:13.741 ************************************ 00:20:13.741 02:01:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:20:13.741 02:01:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:13.741 02:01:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:13.741 02:01:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:13.741 02:01:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.741 ************************************ 00:20:13.741 START TEST nvmf_digest 00:20:13.741 ************************************ 00:20:13.741 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:14.001 * Looking for test storage... 
00:20:14.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:14.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.001 --rc genhtml_branch_coverage=1 00:20:14.001 --rc genhtml_function_coverage=1 00:20:14.001 --rc genhtml_legend=1 00:20:14.001 --rc geninfo_all_blocks=1 00:20:14.001 --rc geninfo_unexecuted_blocks=1 00:20:14.001 00:20:14.001 ' 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:14.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.001 --rc genhtml_branch_coverage=1 00:20:14.001 --rc genhtml_function_coverage=1 00:20:14.001 --rc genhtml_legend=1 00:20:14.001 --rc geninfo_all_blocks=1 00:20:14.001 --rc geninfo_unexecuted_blocks=1 00:20:14.001 00:20:14.001 ' 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:14.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.001 --rc genhtml_branch_coverage=1 00:20:14.001 --rc genhtml_function_coverage=1 00:20:14.001 --rc genhtml_legend=1 00:20:14.001 --rc geninfo_all_blocks=1 00:20:14.001 --rc geninfo_unexecuted_blocks=1 00:20:14.001 00:20:14.001 ' 00:20:14.001 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:14.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.001 --rc genhtml_branch_coverage=1 00:20:14.001 --rc genhtml_function_coverage=1 00:20:14.001 --rc genhtml_legend=1 00:20:14.002 --rc geninfo_all_blocks=1 00:20:14.002 --rc geninfo_unexecuted_blocks=1 00:20:14.002 00:20:14.002 ' 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:14.002 02:01:24 
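The digest preamble above is deciding which lcov flags to export: scripts/common.sh splits the installed lcov version and the threshold 2 on the IFS set `.-:` and compares component-wise (`lt 1.15 2`), selecting the pre-2.0 `lcov_*` --rc option spellings when the installed lcov is older. A simplified reconstruction of that comparison (helper name ver_lt is illustrative; the real function is cmp_versions):

    # Component-wise dotted-version "less than": split on . - : and compare
    # field by field, treating missing fields as 0.
    ver_lt() {
        local IFS='.-:' i
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }

    ver_lt 1.15 2 && echo "lcov < 2: use legacy lcov_* --rc options"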
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:14.002 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:14.002 Cannot find device "nvmf_init_br" 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:14.002 Cannot find device "nvmf_init_br2" 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:20:14.002 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:14.002 Cannot find device "nvmf_tgt_br" 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:20:14.262 Cannot find device "nvmf_tgt_br2" 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:14.262 Cannot find device "nvmf_init_br" 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:14.262 Cannot find device "nvmf_init_br2" 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:14.262 Cannot find device "nvmf_tgt_br" 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:14.262 Cannot find device "nvmf_tgt_br2" 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:14.262 Cannot find device "nvmf_br" 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:14.262 Cannot find device "nvmf_init_if" 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:14.262 Cannot find device "nvmf_init_if2" 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:14.262 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:14.262 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:14.262 02:01:24 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:14.262 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:14.522 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:14.522 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:14.522 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:14.522 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:14.522 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:14.522 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:14.522 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:14.522 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:14.522 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:14.522 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:14.522 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:14.522 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:14.522 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:14.522 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:14.522 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:14.522 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:14.522 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:14.522 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:14.522 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:14.522 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:14.522 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:20:14.522 00:20:14.522 --- 10.0.0.3 ping statistics --- 00:20:14.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.522 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:20:14.522 02:01:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:14.522 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:14.522 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:20:14.522 00:20:14.522 --- 10.0.0.4 ping statistics --- 00:20:14.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.522 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:20:14.522 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:14.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:14.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:20:14.522 00:20:14.522 --- 10.0.0.1 ping statistics --- 00:20:14.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.522 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:20:14.522 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:14.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:14.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:20:14.522 00:20:14.522 --- 10.0.0.2 ping statistics --- 00:20:14.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.522 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:20:14.522 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:14.522 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:20:14.522 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:14.522 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:14.522 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:14.522 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:14.522 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:14.522 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:14.522 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:14.522 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:14.522 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:20:14.522 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:20:14.523 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:14.523 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:14.523 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:14.523 ************************************ 00:20:14.523 START TEST nvmf_digest_clean 00:20:14.523 ************************************ 00:20:14.523 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:20:14.523 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
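(Editor's note, for readers reconstructing the harness: the nvmf_veth_init trace above builds veth pairs for initiator and target, moves the target ends into the nvmf_tgt_ns_spdk namespace, bridges the host-side peers, and opens TCP port 4420. A minimal sketch of the same topology, showing one initiator/target pair of the two the trace creates, using only interface names and addresses that appear in the log; the harness's ipts helper additionally tags each iptables rule with an SPDK_NVMF comment:

    # create the target namespace and one veth pair per side (names from the trace)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # initiator side 10.0.0.1/24, target side 10.0.0.3/24
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # bridge the host-side peers so initiator and target can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # allow NVMe/TCP traffic in on port 4420, as in the trace
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

With all links brought up (ip link set ... up), the ping checks above against 10.0.0.3 succeed.)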
00:20:14.523 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:20:14.523 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:20:14.523 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:20:14.523 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:20:14.523 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:14.523 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:14.523 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:14.523 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=94521 00:20:14.523 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 94521 00:20:14.523 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:14.523 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94521 ']' 00:20:14.523 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.523 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:14.523 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.523 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:14.523 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:14.523 [2024-11-19 02:01:25.114071] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:20:14.523 [2024-11-19 02:01:25.114170] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.782 [2024-11-19 02:01:25.268806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.782 [2024-11-19 02:01:25.292261] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.782 [2024-11-19 02:01:25.292327] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.782 [2024-11-19 02:01:25.292343] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.782 [2024-11-19 02:01:25.292353] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.782 [2024-11-19 02:01:25.292361] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:14.782 [2024-11-19 02:01:25.292730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.782 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:14.782 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:14.782 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:14.782 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:14.782 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:15.042 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.042 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:20:15.042 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:20:15.042 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:20:15.042 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.042 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:15.042 [2024-11-19 02:01:25.448239] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:15.042 null0 00:20:15.042 [2024-11-19 02:01:25.483662] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.042 [2024-11-19 02:01:25.507807] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:15.042 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.042 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:20:15.042 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:15.042 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:15.042 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:15.042 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:15.042 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:15.042 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:15.042 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94545 00:20:15.042 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:15.042 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94545 /var/tmp/bperf.sock 00:20:15.042 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94545 ']' 00:20:15.042 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:20:15.042 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:15.042 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:15.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:15.042 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:15.042 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:15.042 [2024-11-19 02:01:25.572220] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:20:15.042 [2024-11-19 02:01:25.572479] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94545 ] 00:20:15.301 [2024-11-19 02:01:25.732599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.301 [2024-11-19 02:01:25.757113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.301 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:15.301 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:15.301 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:15.301 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:15.301 02:01:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:15.561 [2024-11-19 02:01:26.156928] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:15.820 02:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:15.820 02:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:16.079 nvme0n1 00:20:16.079 02:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:16.079 02:01:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:16.079 Running I/O for 2 seconds... 
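(Editor's note: while this two-second randread run executes, it is worth noting the unit conversion the result table below applies: MiB/s = IOPS x io_size / 2^20. A quick arithmetic check against the numbers that follow, for the 4096-byte blocks and roughly 17872 IOPS this run reports:

    awk 'BEGIN { printf "%.2f MiB/s\n", 17872.01 * 4096 / 1048576 }'   # prints 69.81 MiB/s

which matches the "mibps" field in the JSON results below.)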
00:20:18.395 17653.00 IOPS, 68.96 MiB/s [2024-11-19T02:01:29.010Z] 17843.50 IOPS, 69.70 MiB/s 00:20:18.395 Latency(us) 00:20:18.395 [2024-11-19T02:01:29.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.395 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:18.395 nvme0n1 : 2.00 17872.01 69.81 0.00 0.00 7157.12 6732.33 17635.14 00:20:18.395 [2024-11-19T02:01:29.010Z] =================================================================================================================== 00:20:18.395 [2024-11-19T02:01:29.010Z] Total : 17872.01 69.81 0.00 0.00 7157.12 6732.33 17635.14 00:20:18.395 { 00:20:18.395 "results": [ 00:20:18.395 { 00:20:18.395 "job": "nvme0n1", 00:20:18.395 "core_mask": "0x2", 00:20:18.395 "workload": "randread", 00:20:18.395 "status": "finished", 00:20:18.395 "queue_depth": 128, 00:20:18.395 "io_size": 4096, 00:20:18.395 "runtime": 2.003972, 00:20:18.395 "iops": 17872.00619569535, 00:20:18.395 "mibps": 69.81252420193496, 00:20:18.395 "io_failed": 0, 00:20:18.395 "io_timeout": 0, 00:20:18.395 "avg_latency_us": 7157.121658624496, 00:20:18.395 "min_latency_us": 6732.334545454545, 00:20:18.395 "max_latency_us": 17635.14181818182 00:20:18.395 } 00:20:18.395 ], 00:20:18.395 "core_count": 1 00:20:18.395 } 00:20:18.395 02:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:18.395 02:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:18.395 02:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:18.395 02:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:18.395 02:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:18.395 | select(.opcode=="crc32c") 00:20:18.395 | "\(.module_name) \(.executed)"' 00:20:18.395 02:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:18.395 02:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:18.395 02:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:18.395 02:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:18.395 02:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94545 00:20:18.395 02:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94545 ']' 00:20:18.395 02:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94545 00:20:18.395 02:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:18.395 02:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:18.395 02:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94545 00:20:18.395 killing process with pid 94545 00:20:18.395 Received shutdown signal, test time was about 2.000000 seconds 00:20:18.395 00:20:18.395 Latency(us) 00:20:18.395 [2024-11-19T02:01:29.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:18.395 [2024-11-19T02:01:29.010Z] =================================================================================================================== 00:20:18.395 [2024-11-19T02:01:29.010Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:18.395 02:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:18.395 02:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:18.395 02:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94545' 00:20:18.395 02:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94545 00:20:18.395 02:01:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94545 00:20:18.654 02:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:20:18.654 02:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:18.654 02:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:18.654 02:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:18.654 02:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:18.654 02:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:18.654 02:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:18.654 02:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94598 00:20:18.655 02:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94598 /var/tmp/bperf.sock 00:20:18.655 02:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:18.655 02:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94598 ']' 00:20:18.655 02:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:18.655 02:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:18.655 02:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:18.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:18.655 02:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:18.655 02:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:18.655 [2024-11-19 02:01:29.170528] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:20:18.655 [2024-11-19 02:01:29.170841] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94598 ] 00:20:18.655 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:18.655 Zero copy mechanism will not be used. 00:20:18.913 [2024-11-19 02:01:29.308688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.913 [2024-11-19 02:01:29.327980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.913 02:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:18.913 02:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:18.913 02:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:18.913 02:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:18.913 02:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:19.173 [2024-11-19 02:01:29.639851] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:19.173 02:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:19.173 02:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:19.432 nvme0n1 00:20:19.432 02:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:19.433 02:01:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:19.691 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:19.691 Zero copy mechanism will not be used. 00:20:19.691 Running I/O for 2 seconds... 
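(Editor's note: the "zero copy threshold" notices above are the sock layer reporting that, because this run's 131072-byte I/Os exceed the 65536-byte threshold, the zero-copy send path will not be used. For 128 KiB blocks the throughput conversion reduces to a division by 8, since 131072 / 2^20 = 1/8, which is how the MiB/s column in the results below relates to the IOPS column:

    awk 'BEGIN { printf "%.2f MiB/s\n", 8710.54 / 8 }'   # prints 1088.82, matching the table
)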
00:20:21.562 8768.00 IOPS, 1096.00 MiB/s [2024-11-19T02:01:32.177Z] 8712.00 IOPS, 1089.00 MiB/s 00:20:21.562 Latency(us) 00:20:21.562 [2024-11-19T02:01:32.177Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.562 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:21.562 nvme0n1 : 2.00 8710.54 1088.82 0.00 0.00 1834.04 1653.29 3395.96 00:20:21.562 [2024-11-19T02:01:32.177Z] =================================================================================================================== 00:20:21.562 [2024-11-19T02:01:32.177Z] Total : 8710.54 1088.82 0.00 0.00 1834.04 1653.29 3395.96 00:20:21.562 { 00:20:21.562 "results": [ 00:20:21.562 { 00:20:21.562 "job": "nvme0n1", 00:20:21.562 "core_mask": "0x2", 00:20:21.562 "workload": "randread", 00:20:21.562 "status": "finished", 00:20:21.562 "queue_depth": 16, 00:20:21.562 "io_size": 131072, 00:20:21.562 "runtime": 2.002172, 00:20:21.562 "iops": 8710.54035317645, 00:20:21.562 "mibps": 1088.8175441470562, 00:20:21.562 "io_failed": 0, 00:20:21.562 "io_timeout": 0, 00:20:21.562 "avg_latency_us": 1834.0388723936614, 00:20:21.562 "min_latency_us": 1653.2945454545454, 00:20:21.562 "max_latency_us": 3395.9563636363637 00:20:21.562 } 00:20:21.562 ], 00:20:21.562 "core_count": 1 00:20:21.562 } 00:20:21.562 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:21.562 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:21.562 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:21.562 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:21.562 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:21.562 | select(.opcode=="crc32c") 00:20:21.562 | "\(.module_name) \(.executed)"' 00:20:21.820 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:21.820 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:21.820 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:21.820 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:21.820 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94598 00:20:21.820 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94598 ']' 00:20:21.820 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94598 00:20:21.820 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:21.820 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:21.820 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94598 00:20:21.820 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:21.820 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
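(Editor's note: the crc32c accounting check above is worth unpacking. accel_get_stats returns a JSON list of accel operations, and the jq filter reduces it to "module executed-count" pairs so the test can assert that the software crc32c module, rather than a DSA offload, performed the digest work. A sketch of that pipeline run by hand, using only the RPC and filter shown in the trace; the output count is illustrative, not taken from this log:

    scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # software 35687    <- hypothetical output: module name and executed count
)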
00:20:21.820 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94598' 00:20:21.820 killing process with pid 94598 00:20:21.820 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94598 00:20:21.820 Received shutdown signal, test time was about 2.000000 seconds 00:20:21.820 00:20:21.820 Latency(us) 00:20:21.820 [2024-11-19T02:01:32.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.820 [2024-11-19T02:01:32.436Z] =================================================================================================================== 00:20:21.821 [2024-11-19T02:01:32.436Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:21.821 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94598 00:20:22.079 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:20:22.079 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:22.079 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:22.079 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:22.079 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:22.079 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:22.079 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:22.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:22.079 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94645 00:20:22.079 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94645 /var/tmp/bperf.sock 00:20:22.079 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94645 ']' 00:20:22.079 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:22.079 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:22.079 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:22.079 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:22.079 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:22.079 02:01:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:22.079 [2024-11-19 02:01:32.601912] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:20:22.079 [2024-11-19 02:01:32.602032] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94645 ] 00:20:22.339 [2024-11-19 02:01:32.739076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.339 [2024-11-19 02:01:32.758194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.276 02:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:23.276 02:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:23.276 02:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:23.276 02:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:23.276 02:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:23.276 [2024-11-19 02:01:33.833539] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:23.276 02:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:23.276 02:01:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:23.844 nvme0n1 00:20:23.844 02:01:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:23.844 02:01:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:23.844 Running I/O for 2 seconds... 
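(Editor's note: a useful sanity check while this queue-depth-128 randwrite run executes. By Little's law, in-flight I/Os = IOPS x average latency, so the expected average latency is roughly 128 / 19199.79 IOPS, about 6.67 ms, and the table below indeed reports about 6660 us:

    awk 'BEGIN { printf "%.0f us\n", 128 / 19199.79 * 1e6 }'   # ~6667 us, close to the reported 6660.36
)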
00:20:25.720 19051.00 IOPS, 74.42 MiB/s [2024-11-19T02:01:36.335Z] 19177.50 IOPS, 74.91 MiB/s 00:20:25.720 Latency(us) 00:20:25.720 [2024-11-19T02:01:36.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.720 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:25.720 nvme0n1 : 2.00 19199.79 75.00 0.00 0.00 6660.36 6136.55 15490.33 00:20:25.720 [2024-11-19T02:01:36.335Z] =================================================================================================================== 00:20:25.720 [2024-11-19T02:01:36.335Z] Total : 19199.79 75.00 0.00 0.00 6660.36 6136.55 15490.33 00:20:25.720 { 00:20:25.720 "results": [ 00:20:25.720 { 00:20:25.720 "job": "nvme0n1", 00:20:25.720 "core_mask": "0x2", 00:20:25.720 "workload": "randwrite", 00:20:25.720 "status": "finished", 00:20:25.720 "queue_depth": 128, 00:20:25.720 "io_size": 4096, 00:20:25.720 "runtime": 2.004345, 00:20:25.720 "iops": 19199.78845957158, 00:20:25.720 "mibps": 74.99917367020149, 00:20:25.720 "io_failed": 0, 00:20:25.720 "io_timeout": 0, 00:20:25.720 "avg_latency_us": 6660.362110873041, 00:20:25.720 "min_latency_us": 6136.552727272728, 00:20:25.720 "max_latency_us": 15490.327272727272 00:20:25.720 } 00:20:25.720 ], 00:20:25.720 "core_count": 1 00:20:25.720 } 00:20:25.720 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:25.720 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:25.720 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:25.720 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:25.720 | select(.opcode=="crc32c") 00:20:25.720 | "\(.module_name) \(.executed)"' 00:20:25.720 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:26.288 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:26.288 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:26.288 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:26.288 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:26.288 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94645 00:20:26.288 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94645 ']' 00:20:26.288 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94645 00:20:26.288 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:26.288 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:26.288 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94645 00:20:26.288 killing process with pid 94645 00:20:26.288 Received shutdown signal, test time was about 2.000000 seconds 00:20:26.288 00:20:26.288 Latency(us) 00:20:26.288 [2024-11-19T02:01:36.903Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:26.288 [2024-11-19T02:01:36.903Z] =================================================================================================================== 00:20:26.288 [2024-11-19T02:01:36.903Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:26.288 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:26.288 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:26.288 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94645' 00:20:26.288 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94645 00:20:26.288 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94645 00:20:26.288 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:20:26.288 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:26.288 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:26.288 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:26.288 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:26.288 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:26.288 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:26.288 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94701 00:20:26.288 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94701 /var/tmp/bperf.sock 00:20:26.288 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:26.288 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94701 ']' 00:20:26.288 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:26.288 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.289 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:26.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:26.289 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.289 02:01:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:26.289 [2024-11-19 02:01:36.804488] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:20:26.289 [2024-11-19 02:01:36.804807] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94701 ] 00:20:26.289 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:26.289 Zero copy mechanism will not be used. 00:20:26.548 [2024-11-19 02:01:36.946264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.548 [2024-11-19 02:01:36.965172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:27.485 02:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.485 02:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:27.485 02:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:27.485 02:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:27.485 02:01:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:27.485 [2024-11-19 02:01:37.988577] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:27.485 02:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:27.485 02:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:27.745 nvme0n1 00:20:27.745 02:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:27.745 02:01:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:28.004 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:28.004 Zero copy mechanism will not be used. 00:20:28.004 Running I/O for 2 seconds... 
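(Editor's note: for reference, each of the four bdevperf runs in this test follows the same RPC sequence, all of it visible in the traces above. A condensed replay for this final run, with paths and arguments exactly as logged; the trailing kill of the recorded bperf pid is the cleanup step the harness performs:

    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats    # then kill the bperf pid
)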
00:20:29.878 7469.00 IOPS, 933.62 MiB/s
[2024-11-19T02:01:40.493Z] 7421.50 IOPS, 927.69 MiB/s
00:20:29.878 Latency(us)
[2024-11-19T02:01:40.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:29.878 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:20:29.878 nvme0n1 : 2.00 7419.10 927.39 0.00 0.00 2151.85 1757.56 10128.29
00:20:29.878 [2024-11-19T02:01:40.493Z] ===================================================================================================================
00:20:29.878 [2024-11-19T02:01:40.493Z] Total : 7419.10 927.39 0.00 0.00 2151.85 1757.56 10128.29
00:20:29.878 {
00:20:29.878   "results": [
00:20:29.878     {
00:20:29.878       "job": "nvme0n1",
00:20:29.878       "core_mask": "0x2",
00:20:29.878       "workload": "randwrite",
00:20:29.878       "status": "finished",
00:20:29.878       "queue_depth": 16,
00:20:29.878       "io_size": 131072,
00:20:29.878       "runtime": 2.003478,
00:20:29.878       "iops": 7419.098188250632,
00:20:29.878       "mibps": 927.387273531329,
00:20:29.878       "io_failed": 0,
00:20:29.878       "io_timeout": 0,
00:20:29.878       "avg_latency_us": 2151.8517271748706,
00:20:29.878       "min_latency_us": 1757.5563636363636,
00:20:29.878       "max_latency_us": 10128.290909090909
00:20:29.878     }
00:20:29.878   ],
00:20:29.878   "core_count": 1
00:20:29.878 }
00:20:29.878 02:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:20:29.878 02:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:20:29.878 02:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:20:29.878 | select(.opcode=="crc32c")
00:20:29.878 | "\(.module_name) \(.executed)"'
00:20:29.878 02:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:20:29.878 02:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:20:30.138 02:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:20:30.138 02:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:20:30.138 02:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:20:30.138 02:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
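What the checks above do: the accel layer's statistics are fetched from bdevperf and reduced to the module that actually executed the crc32c operations; with no DSA offload (scan_dsa=false), the expected module is software, and the test only passes if that module executed at least one digest. The same verification condensed into a few lines, with the jq filter copied from host/digest.sh:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    read -r acc_module acc_executed < <(
        $rpc -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    (( acc_executed > 0 ))           # some digests were actually computed...
    [[ $acc_module == software ]]    # ...and by the expected module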
00:20:30.138 02:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94701
00:20:30.138 02:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94701 ']'
00:20:30.138 02:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94701
00:20:30.138 02:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:20:30.138 02:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:30.138 02:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94701
00:20:30.398 killing process with pid 94701
00:20:30.398 Received shutdown signal, test time was about 2.000000 seconds
00:20:30.398 
00:20:30.398 Latency(us)
[2024-11-19T02:01:41.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:30.398 [2024-11-19T02:01:41.013Z] ===================================================================================================================
00:20:30.398 [2024-11-19T02:01:41.013Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:30.398 02:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:20:30.398 02:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:20:30.398 02:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94701'
00:20:30.398 02:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94701
00:20:30.398 02:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94701
00:20:30.398 02:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 94521
00:20:30.398 02:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94521 ']'
00:20:30.398 02:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94521
00:20:30.398 02:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:20:30.398 02:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:30.398 02:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94521
00:20:30.398 killing process with pid 94521
00:20:30.398 02:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:30.398 02:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:30.398 02:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94521'
00:20:30.398 02:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94521
00:20:30.398 02:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94521
00:20:30.657 ************************************
00:20:30.657 END TEST nvmf_digest_clean
00:20:30.657 ************************************
00:20:30.657 
00:20:30.657 real	0m15.984s
00:20:30.657 user	0m31.636s
00:20:30.657 sys	0m4.286s
00:20:30.657 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:30.657 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:20:30.657 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:20:30.657 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:30.657 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:30.657 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:20:30.657 ************************************
00:20:30.657 START TEST nvmf_digest_error
00:20:30.657 ************************************
00:20:30.657 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error
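killprocess, as traced twice above (pids 94701 and 94521), identifies the process before signalling it and then reaps the child so the exit status propagates. A rough bash equivalent reconstructed from the trace; the real helper in common/autotest_common.sh may differ in details such as the sudo escalation path:

    killprocess() {
        local pid=$1 process_name=
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1     # bail out if the process is already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        if [ "$process_name" = sudo ]; then
            sudo kill "$pid"           # assumed: escalate for sudo-wrapped processes
        else
            kill "$pid"
        fi
        wait "$pid"                    # reap and propagate the exit code
    }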
00:20:30.657 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:20:30.657 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:20:30.657 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:30.657 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:30.657 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=94784
00:20:30.657 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 94784
00:20:30.657 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94784 ']'
00:20:30.657 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:30.657 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:20:30.657 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:30.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:30.657 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:30.657 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:30.657 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:30.917 [2024-11-19 02:01:41.129604] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization...
00:20:30.917 [2024-11-19 02:01:41.129845] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:30.917 [2024-11-19 02:01:41.268190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:30.917 [2024-11-19 02:01:41.287282] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:30.917 [2024-11-19 02:01:41.287337] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:30.917 [2024-11-19 02:01:41.287362] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:30.917 [2024-11-19 02:01:41.287369] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:30.917 [2024-11-19 02:01:41.287375] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
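waitforlisten (max_retries=100 above) polls the new application's RPC socket until it answers, so no configuration RPC is sent before the target is ready. A rough equivalent of the loop; the rpc_get_methods probe and the retry delay are assumptions, since the helper's body is not expanded in this trace:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" || return 1   # the app died before it started listening
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 \
                   rpc_get_methods &> /dev/null; then
                return 0                 # socket is up and answering RPCs
            fi
            sleep 0.5
        done
        return 1
    }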
00:20:30.917 [2024-11-19 02:01:41.287688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:30.917 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:30.917 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:20:30.917 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:20:30.917 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:30.917 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:30.917 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:30.917 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:20:30.917 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:30.917 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:30.917 [2024-11-19 02:01:41.416086] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:20:30.917 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:30.917 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:20:30.917 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:20:30.917 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:30.917 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:30.917 [2024-11-19 02:01:41.449755] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:20:30.917 null0
00:20:30.917 [2024-11-19 02:01:41.480166] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:30.917 [2024-11-19 02:01:41.504272] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:20:30.917 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:30.917 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:20:30.917 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:20:30.917 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:20:30.917 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:20:30.917 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:20:30.917 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:20:30.918 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94809
00:20:30.918 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94809 /var/tmp/bperf.sock
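Target-side order matters here: crc32c is routed to the error accel module while nvmf_tgt is still parked in --wait-for-rpc, so every digest the target computes after startup can be corrupted on demand; only then are the TCP transport and the 10.0.0.3:4420 listener created (the bare "null0" line is the backing null bdev being created). A sketch of that configuration; the transport/subsystem RPCs below are standard SPDK calls standing in for the batched rpc_cmd config that the trace does not expand, and the null-bdev size arguments are assumptions:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py          # default socket /var/tmp/spdk.sock
    $rpc accel_assign_opc -o crc32c -m error                 # must happen before framework init
    $rpc framework_start_init
    $rpc bdev_null_create null0 1024 512                     # assumed size/block-size args
    $rpc nvmf_create_transport -t tcp
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420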
02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94809 ']'
00:20:30.918 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:20:30.918 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:30.918 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:20:30.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:20:30.918 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:30.918 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:31.177 [2024-11-19 02:01:41.558748] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization...
00:20:31.177 [2024-11-19 02:01:41.558829] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94809 ]
00:20:31.177 [2024-11-19 02:01:41.694689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:31.177 [2024-11-19 02:01:41.713810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:31.177 [2024-11-19 02:01:41.742123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:20:31.177 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:31.177 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:20:31.177 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:31.177 02:01:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:31.437 02:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:20:31.437 02:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:31.437 02:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:31.437 02:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:31.437 02:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:31.437 02:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:32.004 nvme0n1
00:20:32.004 02:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:20:32.004 02:01:42
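The error injection is toggled around the connect: it is disabled so the --ddgst attach itself completes cleanly, then re-armed with -t corrupt -i 256 just before perform_tests, which is why every read completion below carries a transient transport error after a host-side data digest failure. The same toggle as issued above, with the inject RPCs going to the target on its default socket and the bdev RPCs to bdevperf:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $rpc accel_error_inject_error -o crc32c -t disable       # clean window for the attach
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256  # re-arm before the run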
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.005 02:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:32.005 02:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.005 02:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:32.005 02:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:32.005 Running I/O for 2 seconds... 00:20:32.005 [2024-11-19 02:01:42.521507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.005 [2024-11-19 02:01:42.521569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.005 [2024-11-19 02:01:42.521600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.005 [2024-11-19 02:01:42.535653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.005 [2024-11-19 02:01:42.535689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.005 [2024-11-19 02:01:42.535719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.005 [2024-11-19 02:01:42.549932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.005 [2024-11-19 02:01:42.549994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.005 [2024-11-19 02:01:42.550023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.005 [2024-11-19 02:01:42.564402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.005 [2024-11-19 02:01:42.564437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.005 [2024-11-19 02:01:42.564467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.005 [2024-11-19 02:01:42.578929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.005 [2024-11-19 02:01:42.578963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.005 [2024-11-19 02:01:42.578993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.005 [2024-11-19 02:01:42.593040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.005 [2024-11-19 02:01:42.593075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6369 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.005 [2024-11-19 02:01:42.593104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.005 [2024-11-19 02:01:42.607359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.005 [2024-11-19 02:01:42.607393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.005 [2024-11-19 02:01:42.607422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.266 [2024-11-19 02:01:42.622168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.266 [2024-11-19 02:01:42.622452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.266 [2024-11-19 02:01:42.622470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.266 [2024-11-19 02:01:42.637211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.266 [2024-11-19 02:01:42.637247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.266 [2024-11-19 02:01:42.637276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.266 [2024-11-19 02:01:42.651560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.266 [2024-11-19 02:01:42.651597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.266 [2024-11-19 02:01:42.651626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.266 [2024-11-19 02:01:42.665449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.266 [2024-11-19 02:01:42.665485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.266 [2024-11-19 02:01:42.665542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.266 [2024-11-19 02:01:42.679704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.266 [2024-11-19 02:01:42.679738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.266 [2024-11-19 02:01:42.679767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.266 [2024-11-19 02:01:42.693795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.266 [2024-11-19 02:01:42.693830] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.266 [2024-11-19 02:01:42.693859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.266 [2024-11-19 02:01:42.707929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.266 [2024-11-19 02:01:42.707963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.266 [2024-11-19 02:01:42.707991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.266 [2024-11-19 02:01:42.721977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.266 [2024-11-19 02:01:42.722013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.267 [2024-11-19 02:01:42.722042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.267 [2024-11-19 02:01:42.736194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.267 [2024-11-19 02:01:42.736230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.267 [2024-11-19 02:01:42.736260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.267 [2024-11-19 02:01:42.750612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.267 [2024-11-19 02:01:42.750646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.267 [2024-11-19 02:01:42.750674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.267 [2024-11-19 02:01:42.764770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.267 [2024-11-19 02:01:42.764805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.267 [2024-11-19 02:01:42.764833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.267 [2024-11-19 02:01:42.778976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.267 [2024-11-19 02:01:42.779010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.267 [2024-11-19 02:01:42.779038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.267 [2024-11-19 02:01:42.793135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.267 [2024-11-19 02:01:42.793169] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.267 [2024-11-19 02:01:42.793198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.267 [2024-11-19 02:01:42.807470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.267 [2024-11-19 02:01:42.807530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.267 [2024-11-19 02:01:42.807560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.267 [2024-11-19 02:01:42.821631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.267 [2024-11-19 02:01:42.821809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.267 [2024-11-19 02:01:42.821842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.267 [2024-11-19 02:01:42.836063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.267 [2024-11-19 02:01:42.836097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.267 [2024-11-19 02:01:42.836126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.267 [2024-11-19 02:01:42.850225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.267 [2024-11-19 02:01:42.850307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.267 [2024-11-19 02:01:42.850320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.267 [2024-11-19 02:01:42.864227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.267 [2024-11-19 02:01:42.864260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.267 [2024-11-19 02:01:42.864288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.267 [2024-11-19 02:01:42.878691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.267 [2024-11-19 02:01:42.878723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.267 [2024-11-19 02:01:42.878752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.526 [2024-11-19 02:01:42.893925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x13eeb10) 00:20:32.526 [2024-11-19 02:01:42.894002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.526 [2024-11-19 02:01:42.894033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.527 [2024-11-19 02:01:42.908348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.527 [2024-11-19 02:01:42.908382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.527 [2024-11-19 02:01:42.908410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.527 [2024-11-19 02:01:42.922653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.527 [2024-11-19 02:01:42.922686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.527 [2024-11-19 02:01:42.922715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.527 [2024-11-19 02:01:42.936626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.527 [2024-11-19 02:01:42.936659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.527 [2024-11-19 02:01:42.936688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.527 [2024-11-19 02:01:42.950779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.527 [2024-11-19 02:01:42.950824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.527 [2024-11-19 02:01:42.950854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.527 [2024-11-19 02:01:42.964825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.527 [2024-11-19 02:01:42.964859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.527 [2024-11-19 02:01:42.964887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.527 [2024-11-19 02:01:42.978850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.527 [2024-11-19 02:01:42.978883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.527 [2024-11-19 02:01:42.978911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.527 [2024-11-19 02:01:42.992866] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.527 [2024-11-19 02:01:42.992900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.527 [2024-11-19 02:01:42.992929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.527 [2024-11-19 02:01:43.006835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.527 [2024-11-19 02:01:43.006868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.527 [2024-11-19 02:01:43.006896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.527 [2024-11-19 02:01:43.020851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.527 [2024-11-19 02:01:43.020883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.527 [2024-11-19 02:01:43.020911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.527 [2024-11-19 02:01:43.034942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.527 [2024-11-19 02:01:43.034975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.527 [2024-11-19 02:01:43.035004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.527 [2024-11-19 02:01:43.049429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.527 [2024-11-19 02:01:43.049462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.527 [2024-11-19 02:01:43.049491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.527 [2024-11-19 02:01:43.063681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.527 [2024-11-19 02:01:43.063714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.527 [2024-11-19 02:01:43.063742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.527 [2024-11-19 02:01:43.077914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.527 [2024-11-19 02:01:43.078154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.527 [2024-11-19 02:01:43.078173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:20:32.527 [2024-11-19 02:01:43.094212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.527 [2024-11-19 02:01:43.094281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.527 [2024-11-19 02:01:43.094310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.527 [2024-11-19 02:01:43.110733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.527 [2024-11-19 02:01:43.110768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.527 [2024-11-19 02:01:43.110797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.527 [2024-11-19 02:01:43.125769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.527 [2024-11-19 02:01:43.125984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.527 [2024-11-19 02:01:43.126018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.527 [2024-11-19 02:01:43.141220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.527 [2024-11-19 02:01:43.141256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.527 [2024-11-19 02:01:43.141285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.787 [2024-11-19 02:01:43.157098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.787 [2024-11-19 02:01:43.157134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.787 [2024-11-19 02:01:43.157162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.787 [2024-11-19 02:01:43.172600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.787 [2024-11-19 02:01:43.172638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.787 [2024-11-19 02:01:43.172651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.787 [2024-11-19 02:01:43.189832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.787 [2024-11-19 02:01:43.189892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.787 [2024-11-19 02:01:43.189916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.787 [2024-11-19 02:01:43.207204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.787 [2024-11-19 02:01:43.207239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.787 [2024-11-19 02:01:43.207268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.787 [2024-11-19 02:01:43.223476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.787 [2024-11-19 02:01:43.223538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.787 [2024-11-19 02:01:43.223552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.787 [2024-11-19 02:01:43.238500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.787 [2024-11-19 02:01:43.238703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.787 [2024-11-19 02:01:43.238737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.787 [2024-11-19 02:01:43.253599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.787 [2024-11-19 02:01:43.253788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.787 [2024-11-19 02:01:43.253823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.787 [2024-11-19 02:01:43.268921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.787 [2024-11-19 02:01:43.268971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.787 [2024-11-19 02:01:43.268999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.787 [2024-11-19 02:01:43.283249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.787 [2024-11-19 02:01:43.283283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.787 [2024-11-19 02:01:43.283311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.787 [2024-11-19 02:01:43.297416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.787 [2024-11-19 02:01:43.297449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.787 [2024-11-19 02:01:43.297478] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.787 [2024-11-19 02:01:43.312028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.787 [2024-11-19 02:01:43.312060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.787 [2024-11-19 02:01:43.312088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.787 [2024-11-19 02:01:43.326207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.787 [2024-11-19 02:01:43.326394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.787 [2024-11-19 02:01:43.326428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.787 [2024-11-19 02:01:43.340566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.787 [2024-11-19 02:01:43.340765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.787 [2024-11-19 02:01:43.340782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.787 [2024-11-19 02:01:43.354890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.787 [2024-11-19 02:01:43.354924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.787 [2024-11-19 02:01:43.354953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.787 [2024-11-19 02:01:43.368995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.787 [2024-11-19 02:01:43.369029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.787 [2024-11-19 02:01:43.369057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.787 [2024-11-19 02:01:43.383027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.787 [2024-11-19 02:01:43.383060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:25193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.787 [2024-11-19 02:01:43.383088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:32.787 [2024-11-19 02:01:43.397086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:32.787 [2024-11-19 02:01:43.397119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16726 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:32.787 [2024-11-19 02:01:43.397147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.047 [2024-11-19 02:01:43.412656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:33.047 [2024-11-19 02:01:43.412688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.047 [2024-11-19 02:01:43.412716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.047 [2024-11-19 02:01:43.426926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:33.047 [2024-11-19 02:01:43.426959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.047 [2024-11-19 02:01:43.426988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.047 [2024-11-19 02:01:43.447111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:33.047 [2024-11-19 02:01:43.447144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.047 [2024-11-19 02:01:43.447172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.047 [2024-11-19 02:01:43.461132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:33.047 [2024-11-19 02:01:43.461166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.047 [2024-11-19 02:01:43.461194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.047 [2024-11-19 02:01:43.475343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:33.047 [2024-11-19 02:01:43.475377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.047 [2024-11-19 02:01:43.475405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.047 [2024-11-19 02:01:43.489478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:33.047 [2024-11-19 02:01:43.489538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.047 [2024-11-19 02:01:43.489567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:33.047 17332.00 IOPS, 67.70 MiB/s [2024-11-19T02:01:43.662Z] [2024-11-19 02:01:43.503799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10) 00:20:33.047 [2024-11-19 02:01:43.503833] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:33.047 [2024-11-19 02:01:43.503862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-entry pattern repeats from 02:01:43.517960 through 02:01:44.493072: nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done reports *ERROR*: data digest error on tqpair=(0x13eeb10), nvme_qpair.c: 243:nvme_io_qpair_print_command prints the affected READ (sqid:1, len:1, even cids 116 down to 0, then odd cids 1 through 15), and nvme_qpair.c: 474:spdk_nvme_print_completion prints each completion as COMMAND TRANSIENT TRANSPORT ERROR (00/22) cdw0:0 sqhd:0001 p:0 m:0 dnr:0 ...]
00:20:34.087 17268.00 IOPS, 67.45 MiB/s [2024-11-19T02:01:44.702Z]
00:20:34.087 [2024-11-19 02:01:44.508460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13eeb10)
00:20:34.087 [2024-11-19 02:01:44.508494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:34.087 [2024-11-19 02:01:44.508546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:34.087
00:20:34.087 Latency(us)
00:20:34.087 [2024-11-19T02:01:44.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:34.087 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:20:34.087 nvme0n1 : 2.01 17314.83 67.64 0.00 0.00 7386.97 6702.55 27048.49
[2024-11-19T02:01:44.702Z] ===================================================================================================================
00:20:34.087 [2024-11-19T02:01:44.702Z] Total : 17314.83 67.64 0.00 0.00 7386.97 6702.55 27048.49
00:20:34.087 {
00:20:34.087   "results": [
00:20:34.087     {
00:20:34.087       "job": "nvme0n1",
00:20:34.087       "core_mask": "0x2",
00:20:34.087       "workload": "randread",
00:20:34.087       "status": "finished",
00:20:34.087       "queue_depth": 128,
00:20:34.087       "io_size": 4096,
00:20:34.087       "runtime": 2.009318,
00:20:34.087       "iops": 17314.83020606992,
00:20:34.087       "mibps": 67.63605549246063,
00:20:34.087       "io_failed": 0,
00:20:34.087       "io_timeout": 0,
00:20:34.087       "avg_latency_us": 7386.974456194262,
00:20:34.087       "min_latency_us": 6702.545454545455,
00:20:34.087       "max_latency_us": 27048.494545454545
00:20:34.087     }
00:20:34.087   ],
00:20:34.087   "core_count": 1
00:20:34.087 }
00:20:34.087 02:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:34.087 02:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:34.087 02:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:34.087 02:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:34.087 | .driver_specific
00:20:34.087 | .nvme_error
00:20:34.087 | .status_code
00:20:34.087 | .command_transient_transport_error'
00:20:34.347 02:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 136 > 0 ))
00:20:34.347 02:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94809
00:20:34.347 02:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94809 ']'
00:20:34.347 02:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94809
00:20:34.347 02:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:20:34.347 02:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:34.347 02:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94809
00:20:34.347 killing process with pid 94809
00:20:34.347 Received shutdown signal, test time was about 2.000000 seconds
00:20:34.347
00:20:34.347 Latency(us)
[2024-11-19T02:01:44.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:34.347 [2024-11-19T02:01:44.962Z] ===================================================================================================================
00:20:34.347 [2024-11-19T02:01:44.962Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:34.347 02:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:20:34.347 02:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:20:34.347 02:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94809'
00:20:34.347 02:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94809
00:20:34.347 02:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94809
00:20:34.347 02:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:20:34.347 02:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:20:34.347 02:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:20:34.347 02:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:20:34.347 02:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:20:34.347 02:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94856
00:20:34.347 02:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94856 /var/tmp/bperf.sock
00:20:34.347 02:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:20:34.347 02:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94856 ']'
00:20:34.347 02:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:20:34.606 02:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:34.606 02:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:20:34.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:20:34.606 02:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:34.606 02:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:34.606 [2024-11-19 02:01:45.004861] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization...
00:20:34.606 [2024-11-19 02:01:45.005130] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94856 ]
00:20:34.606 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:34.606 Zero copy mechanism will not be used.
00:20:34.606 [2024-11-19 02:01:45.145448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:34.606 [2024-11-19 02:01:45.164181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:34.606 [2024-11-19 02:01:45.191923] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:20:34.865 02:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:34.865 02:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:20:34.865 02:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:34.866 02:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:34.866 02:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:20:34.866 02:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:34.866 02:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:34.866 02:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:34.866 02:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:34.866 02:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:35.125 nvme0n1
00:20:35.384 02:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:20:35.384 02:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:35.384 02:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:35.385 02:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:35.385 02:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:20:35.385 02:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:20:35.385 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:35.385 Zero copy mechanism will not be used.
00:20:35.385 Running I/O for 2 seconds...
00:20:35.385 [2024-11-19 02:01:45.895304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0)
00:20:35.385 [2024-11-19 02:01:45.895553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:35.385 [2024-11-19 02:01:45.895705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-entry pattern repeats from 02:01:45.899801 through 02:01:46.115361 on tqpair=(0x1e4ecc0): every queued READ (sqid:1, len:32, cids cycling 0 through 15, sqhd stepping 0002/0022/0042/0062) fails CRC32C data-digest verification in nvme_tcp_accel_seq_recv_compute_crc32_done and completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) p:0 m:0 dnr:0 ...]
00:20:35.647 [2024-11-19 02:01:46.115361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*:
data digest error on tqpair=(0x1e4ecc0) 00:20:35.647 [2024-11-19 02:01:46.115396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.647 [2024-11-19 02:01:46.115424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.647 [2024-11-19 02:01:46.119393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.647 [2024-11-19 02:01:46.119427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.647 [2024-11-19 02:01:46.119456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.647 [2024-11-19 02:01:46.123495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.647 [2024-11-19 02:01:46.123539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.647 [2024-11-19 02:01:46.123568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.647 [2024-11-19 02:01:46.127333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.647 [2024-11-19 02:01:46.127367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.647 [2024-11-19 02:01:46.127397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.647 [2024-11-19 02:01:46.131410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.647 [2024-11-19 02:01:46.131446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.648 [2024-11-19 02:01:46.131474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.648 [2024-11-19 02:01:46.135354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.648 [2024-11-19 02:01:46.135389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.648 [2024-11-19 02:01:46.135417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.648 [2024-11-19 02:01:46.139371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.648 [2024-11-19 02:01:46.139406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.648 [2024-11-19 02:01:46.139434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.648 [2024-11-19 02:01:46.143432] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.648 [2024-11-19 02:01:46.143467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.648 [2024-11-19 02:01:46.143496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.648 [2024-11-19 02:01:46.147337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.648 [2024-11-19 02:01:46.147372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.648 [2024-11-19 02:01:46.147400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.648 [2024-11-19 02:01:46.151393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.648 [2024-11-19 02:01:46.151428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.648 [2024-11-19 02:01:46.151457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.648 [2024-11-19 02:01:46.155413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.648 [2024-11-19 02:01:46.155448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.648 [2024-11-19 02:01:46.155476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.648 [2024-11-19 02:01:46.159340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.648 [2024-11-19 02:01:46.159373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.648 [2024-11-19 02:01:46.159402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.648 [2024-11-19 02:01:46.163439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.648 [2024-11-19 02:01:46.163473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.648 [2024-11-19 02:01:46.163502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.648 [2024-11-19 02:01:46.167409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.648 [2024-11-19 02:01:46.167443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.648 [2024-11-19 02:01:46.167472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:20:35.648 [2024-11-19 02:01:46.171366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.648 [2024-11-19 02:01:46.171401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.648 [2024-11-19 02:01:46.171431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.648 [2024-11-19 02:01:46.175418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.648 [2024-11-19 02:01:46.175452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.648 [2024-11-19 02:01:46.175480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.648 [2024-11-19 02:01:46.179383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.648 [2024-11-19 02:01:46.179418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.648 [2024-11-19 02:01:46.179447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.648 [2024-11-19 02:01:46.183424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.648 [2024-11-19 02:01:46.183458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.648 [2024-11-19 02:01:46.183486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.648 [2024-11-19 02:01:46.187385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.648 [2024-11-19 02:01:46.187419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.648 [2024-11-19 02:01:46.187448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.648 [2024-11-19 02:01:46.191403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.648 [2024-11-19 02:01:46.191437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.648 [2024-11-19 02:01:46.191466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.648 [2024-11-19 02:01:46.195355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.648 [2024-11-19 02:01:46.195389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.648 [2024-11-19 02:01:46.195417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.648 [2024-11-19 02:01:46.199360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.648 [2024-11-19 02:01:46.199394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.648 [2024-11-19 02:01:46.199423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.648 [2024-11-19 02:01:46.203325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.648 [2024-11-19 02:01:46.203358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.648 [2024-11-19 02:01:46.203387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.648 [2024-11-19 02:01:46.207278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.648 [2024-11-19 02:01:46.207312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.648 [2024-11-19 02:01:46.207341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.648 [2024-11-19 02:01:46.211218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.648 [2024-11-19 02:01:46.211252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.648 [2024-11-19 02:01:46.211280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.648 [2024-11-19 02:01:46.215192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.648 [2024-11-19 02:01:46.215226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.648 [2024-11-19 02:01:46.215255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.648 [2024-11-19 02:01:46.219139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.648 [2024-11-19 02:01:46.219173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.648 [2024-11-19 02:01:46.219202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.648 [2024-11-19 02:01:46.223156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.648 [2024-11-19 02:01:46.223190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.648 [2024-11-19 02:01:46.223219] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.648 [2024-11-19 02:01:46.227271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.648 [2024-11-19 02:01:46.227306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.648 [2024-11-19 02:01:46.227335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.648 [2024-11-19 02:01:46.231276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.648 [2024-11-19 02:01:46.231311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.648 [2024-11-19 02:01:46.231339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.648 [2024-11-19 02:01:46.235288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.648 [2024-11-19 02:01:46.235322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.648 [2024-11-19 02:01:46.235351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.649 [2024-11-19 02:01:46.239415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.649 [2024-11-19 02:01:46.239449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.649 [2024-11-19 02:01:46.239477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.649 [2024-11-19 02:01:46.243587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.649 [2024-11-19 02:01:46.243649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.649 [2024-11-19 02:01:46.243662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.649 [2024-11-19 02:01:46.247943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.649 [2024-11-19 02:01:46.247978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.649 [2024-11-19 02:01:46.248006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.649 [2024-11-19 02:01:46.252229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.649 [2024-11-19 02:01:46.252264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.649 
[2024-11-19 02:01:46.252292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.649 [2024-11-19 02:01:46.256686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.649 [2024-11-19 02:01:46.256723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.649 [2024-11-19 02:01:46.256736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.649 [2024-11-19 02:01:46.261719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.649 [2024-11-19 02:01:46.261758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.649 [2024-11-19 02:01:46.261772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.909 [2024-11-19 02:01:46.266575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.909 [2024-11-19 02:01:46.266644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.909 [2024-11-19 02:01:46.266658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.909 [2024-11-19 02:01:46.271306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.909 [2024-11-19 02:01:46.271509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.909 [2024-11-19 02:01:46.271537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.909 [2024-11-19 02:01:46.275924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.909 [2024-11-19 02:01:46.275959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.909 [2024-11-19 02:01:46.275987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.909 [2024-11-19 02:01:46.280138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.909 [2024-11-19 02:01:46.280172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.909 [2024-11-19 02:01:46.280201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.909 [2024-11-19 02:01:46.284337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.909 [2024-11-19 02:01:46.284371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16544 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.909 [2024-11-19 02:01:46.284400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.909 [2024-11-19 02:01:46.288293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.909 [2024-11-19 02:01:46.288329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.910 [2024-11-19 02:01:46.288358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.910 [2024-11-19 02:01:46.292265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.910 [2024-11-19 02:01:46.292298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.910 [2024-11-19 02:01:46.292327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.910 [2024-11-19 02:01:46.296279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.910 [2024-11-19 02:01:46.296314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.910 [2024-11-19 02:01:46.296342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.910 [2024-11-19 02:01:46.300188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.910 [2024-11-19 02:01:46.300222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.910 [2024-11-19 02:01:46.300251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.910 [2024-11-19 02:01:46.304160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.910 [2024-11-19 02:01:46.304195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.910 [2024-11-19 02:01:46.304223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.910 [2024-11-19 02:01:46.308307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.910 [2024-11-19 02:01:46.308342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.910 [2024-11-19 02:01:46.308372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.910 [2024-11-19 02:01:46.312392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.910 [2024-11-19 02:01:46.312426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.910 [2024-11-19 02:01:46.312454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.910 [2024-11-19 02:01:46.316408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.910 [2024-11-19 02:01:46.316443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.910 [2024-11-19 02:01:46.316472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.910 [2024-11-19 02:01:46.320349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.910 [2024-11-19 02:01:46.320382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.910 [2024-11-19 02:01:46.320411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.910 [2024-11-19 02:01:46.324339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.910 [2024-11-19 02:01:46.324373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.910 [2024-11-19 02:01:46.324401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.910 [2024-11-19 02:01:46.328405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.910 [2024-11-19 02:01:46.328440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.910 [2024-11-19 02:01:46.328468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.910 [2024-11-19 02:01:46.332403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.910 [2024-11-19 02:01:46.332437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.910 [2024-11-19 02:01:46.332466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.910 [2024-11-19 02:01:46.336337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.910 [2024-11-19 02:01:46.336371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.910 [2024-11-19 02:01:46.336400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.910 [2024-11-19 02:01:46.340306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.910 [2024-11-19 02:01:46.340340] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.910 [2024-11-19 02:01:46.340369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.910 [2024-11-19 02:01:46.344281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.910 [2024-11-19 02:01:46.344315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.910 [2024-11-19 02:01:46.344344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.910 [2024-11-19 02:01:46.348265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.910 [2024-11-19 02:01:46.348300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.910 [2024-11-19 02:01:46.348328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.910 [2024-11-19 02:01:46.352329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.910 [2024-11-19 02:01:46.352363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.910 [2024-11-19 02:01:46.352392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.910 [2024-11-19 02:01:46.356369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.910 [2024-11-19 02:01:46.356404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.910 [2024-11-19 02:01:46.356432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.910 [2024-11-19 02:01:46.360317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.910 [2024-11-19 02:01:46.360351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.910 [2024-11-19 02:01:46.360380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.910 [2024-11-19 02:01:46.364374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.910 [2024-11-19 02:01:46.364408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.910 [2024-11-19 02:01:46.364437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.910 [2024-11-19 02:01:46.368352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e4ecc0) 00:20:35.910 [2024-11-19 02:01:46.368386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.910 [2024-11-19 02:01:46.368415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.910 [2024-11-19 02:01:46.372300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.910 [2024-11-19 02:01:46.372334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.910 [2024-11-19 02:01:46.372363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.910 [2024-11-19 02:01:46.376314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.910 [2024-11-19 02:01:46.376349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.910 [2024-11-19 02:01:46.376378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.910 [2024-11-19 02:01:46.380266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.910 [2024-11-19 02:01:46.380300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.910 [2024-11-19 02:01:46.380329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.910 [2024-11-19 02:01:46.384308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.910 [2024-11-19 02:01:46.384345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.910 [2024-11-19 02:01:46.384373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.910 [2024-11-19 02:01:46.388363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.910 [2024-11-19 02:01:46.388416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.910 [2024-11-19 02:01:46.388445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.910 [2024-11-19 02:01:46.392391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.910 [2024-11-19 02:01:46.392426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.910 [2024-11-19 02:01:46.392455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.910 [2024-11-19 02:01:46.396411] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.911 [2024-11-19 02:01:46.396446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.911 [2024-11-19 02:01:46.396475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.911 [2024-11-19 02:01:46.400313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.911 [2024-11-19 02:01:46.400348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.911 [2024-11-19 02:01:46.400377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.911 [2024-11-19 02:01:46.404300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.911 [2024-11-19 02:01:46.404334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.911 [2024-11-19 02:01:46.404363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.911 [2024-11-19 02:01:46.408285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.911 [2024-11-19 02:01:46.408320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.911 [2024-11-19 02:01:46.408348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.911 [2024-11-19 02:01:46.412222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.911 [2024-11-19 02:01:46.412257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.911 [2024-11-19 02:01:46.412285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.911 [2024-11-19 02:01:46.416203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.911 [2024-11-19 02:01:46.416237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.911 [2024-11-19 02:01:46.416265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.911 [2024-11-19 02:01:46.420185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.911 [2024-11-19 02:01:46.420221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.911 [2024-11-19 02:01:46.420250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:20:35.911 [2024-11-19 02:01:46.424164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.911 [2024-11-19 02:01:46.424199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.911 [2024-11-19 02:01:46.424227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.911 [2024-11-19 02:01:46.428182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.911 [2024-11-19 02:01:46.428216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.911 [2024-11-19 02:01:46.428245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.911 [2024-11-19 02:01:46.432187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.911 [2024-11-19 02:01:46.432222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.911 [2024-11-19 02:01:46.432251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.911 [2024-11-19 02:01:46.436212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.911 [2024-11-19 02:01:46.436246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.911 [2024-11-19 02:01:46.436275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.911 [2024-11-19 02:01:46.440189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.911 [2024-11-19 02:01:46.440224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.911 [2024-11-19 02:01:46.440253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.911 [2024-11-19 02:01:46.444113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.911 [2024-11-19 02:01:46.444147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.911 [2024-11-19 02:01:46.444176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.911 [2024-11-19 02:01:46.448115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.911 [2024-11-19 02:01:46.448150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.911 [2024-11-19 02:01:46.448179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.911 [2024-11-19 02:01:46.452013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.911 [2024-11-19 02:01:46.452046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.911 [2024-11-19 02:01:46.452075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.911 [2024-11-19 02:01:46.455989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.911 [2024-11-19 02:01:46.456023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.911 [2024-11-19 02:01:46.456052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.911 [2024-11-19 02:01:46.459862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.911 [2024-11-19 02:01:46.459896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.911 [2024-11-19 02:01:46.459925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.911 [2024-11-19 02:01:46.463723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.911 [2024-11-19 02:01:46.463756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.911 [2024-11-19 02:01:46.463784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.911 [2024-11-19 02:01:46.467677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.911 [2024-11-19 02:01:46.467710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.911 [2024-11-19 02:01:46.467739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.911 [2024-11-19 02:01:46.471766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.911 [2024-11-19 02:01:46.471800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.911 [2024-11-19 02:01:46.471829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.911 [2024-11-19 02:01:46.475739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.911 [2024-11-19 02:01:46.475773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.911 [2024-11-19 02:01:46.475803] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.911 [2024-11-19 02:01:46.479749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.911 [2024-11-19 02:01:46.479782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.911 [2024-11-19 02:01:46.479811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.911 [2024-11-19 02:01:46.483662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.911 [2024-11-19 02:01:46.483695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.911 [2024-11-19 02:01:46.483724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.911 [2024-11-19 02:01:46.487581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.911 [2024-11-19 02:01:46.487614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.911 [2024-11-19 02:01:46.487642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:35.911 [2024-11-19 02:01:46.491525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.911 [2024-11-19 02:01:46.491557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.911 [2024-11-19 02:01:46.491585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:35.911 [2024-11-19 02:01:46.495405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.911 [2024-11-19 02:01:46.495636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.911 [2024-11-19 02:01:46.495654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:35.911 [2024-11-19 02:01:46.499703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.911 [2024-11-19 02:01:46.499788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.911 [2024-11-19 02:01:46.499929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:35.912 [2024-11-19 02:01:46.504025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:35.912 [2024-11-19 02:01:46.504244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.912 
[2024-11-19 02:01:46.504369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:35.912 [2024-11-19 02:01:46.508424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0)
00:20:35.912 [2024-11-19 02:01:46.508674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:35.912 [2024-11-19 02:01:46.508855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... same three-record pattern repeated at timestamps 02:01:46.512-02:01:46.885 (elapsed 00:20:35.912-00:20:36.433): nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done *ERROR* "data digest error on tqpair=(0x1e4ecc0)", the affected READ command printout (sqid:1, cid cycling 0-15, nsid:1, varying lba, len:32, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), and its completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22), sqhd cycling 0002/0022/0042/0062, p:0 m:0 dnr:0 ...]
00:20:36.433 7579.00 IOPS, 947.38 MiB/s [2024-11-19T02:01:47.048Z]
[... pattern continues at timestamps 02:01:46.889-02:01:47.023 (elapsed 00:20:36.433-00:20:36.435), cids repeating irregularly in the 5-12 range ...]
00:20:36.435 [2024-11-19 02:01:47.026638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0)
00:20:36.435 [2024-11-19 02:01:47.026674] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.435 [2024-11-19 02:01:47.026703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.435 [2024-11-19 02:01:47.029605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.435 [2024-11-19 02:01:47.029639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.435 [2024-11-19 02:01:47.029668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.435 [2024-11-19 02:01:47.032335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.435 [2024-11-19 02:01:47.032370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.435 [2024-11-19 02:01:47.032400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.435 [2024-11-19 02:01:47.035341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.435 [2024-11-19 02:01:47.035376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.435 [2024-11-19 02:01:47.035405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.435 [2024-11-19 02:01:47.038361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.435 [2024-11-19 02:01:47.038395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.435 [2024-11-19 02:01:47.038424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.435 [2024-11-19 02:01:47.041306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.435 [2024-11-19 02:01:47.041487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.435 [2024-11-19 02:01:47.041555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.435 [2024-11-19 02:01:47.044756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.435 [2024-11-19 02:01:47.044794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.435 [2024-11-19 02:01:47.044825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.695 [2024-11-19 02:01:47.048146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.695 
[2024-11-19 02:01:47.048183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.695 [2024-11-19 02:01:47.048228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.695 [2024-11-19 02:01:47.051484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.695 [2024-11-19 02:01:47.051548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.695 [2024-11-19 02:01:47.051577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.695 [2024-11-19 02:01:47.054529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.695 [2024-11-19 02:01:47.054608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.695 [2024-11-19 02:01:47.054638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.695 [2024-11-19 02:01:47.057886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.695 [2024-11-19 02:01:47.057920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.695 [2024-11-19 02:01:47.057990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.695 [2024-11-19 02:01:47.060819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.695 [2024-11-19 02:01:47.060855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.695 [2024-11-19 02:01:47.060886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.695 [2024-11-19 02:01:47.063627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.695 [2024-11-19 02:01:47.063660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.695 [2024-11-19 02:01:47.063689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.695 [2024-11-19 02:01:47.066891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.695 [2024-11-19 02:01:47.066926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.695 [2024-11-19 02:01:47.066955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.695 [2024-11-19 02:01:47.069992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1e4ecc0) 00:20:36.695 [2024-11-19 02:01:47.070031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.695 [2024-11-19 02:01:47.070061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.695 [2024-11-19 02:01:47.072779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.695 [2024-11-19 02:01:47.072814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.695 [2024-11-19 02:01:47.072844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.695 [2024-11-19 02:01:47.075836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.695 [2024-11-19 02:01:47.075870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.696 [2024-11-19 02:01:47.075899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.696 [2024-11-19 02:01:47.079003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.696 [2024-11-19 02:01:47.079037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.696 [2024-11-19 02:01:47.079066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.696 [2024-11-19 02:01:47.081816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.696 [2024-11-19 02:01:47.081850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.696 [2024-11-19 02:01:47.081879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.696 [2024-11-19 02:01:47.084731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.696 [2024-11-19 02:01:47.084766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.696 [2024-11-19 02:01:47.084794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.696 [2024-11-19 02:01:47.087941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.696 [2024-11-19 02:01:47.087975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.696 [2024-11-19 02:01:47.088005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.696 [2024-11-19 02:01:47.090821] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.696 [2024-11-19 02:01:47.090856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.696 [2024-11-19 02:01:47.090885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.696 [2024-11-19 02:01:47.093808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.696 [2024-11-19 02:01:47.093841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.696 [2024-11-19 02:01:47.093870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.696 [2024-11-19 02:01:47.096540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.696 [2024-11-19 02:01:47.096574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.696 [2024-11-19 02:01:47.096603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.696 [2024-11-19 02:01:47.099343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.696 [2024-11-19 02:01:47.099378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.696 [2024-11-19 02:01:47.099407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.696 [2024-11-19 02:01:47.102786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.696 [2024-11-19 02:01:47.102820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.696 [2024-11-19 02:01:47.102849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.696 [2024-11-19 02:01:47.105559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.696 [2024-11-19 02:01:47.105594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.696 [2024-11-19 02:01:47.105623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.696 [2024-11-19 02:01:47.108360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.696 [2024-11-19 02:01:47.108394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.696 [2024-11-19 02:01:47.108423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:20:36.696 [2024-11-19 02:01:47.111793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.696 [2024-11-19 02:01:47.111828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.696 [2024-11-19 02:01:47.111858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.696 [2024-11-19 02:01:47.114820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.696 [2024-11-19 02:01:47.114855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.696 [2024-11-19 02:01:47.114884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.696 [2024-11-19 02:01:47.117593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.696 [2024-11-19 02:01:47.117628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.696 [2024-11-19 02:01:47.117657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.696 [2024-11-19 02:01:47.120519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.696 [2024-11-19 02:01:47.120553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.696 [2024-11-19 02:01:47.120582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.696 [2024-11-19 02:01:47.123401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.696 [2024-11-19 02:01:47.123437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.696 [2024-11-19 02:01:47.123467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.696 [2024-11-19 02:01:47.126561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.696 [2024-11-19 02:01:47.126641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.696 [2024-11-19 02:01:47.126656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.696 [2024-11-19 02:01:47.129489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.696 [2024-11-19 02:01:47.129549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.696 [2024-11-19 02:01:47.129578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.696 [2024-11-19 02:01:47.132229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.696 [2024-11-19 02:01:47.132264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.696 [2024-11-19 02:01:47.132293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.696 [2024-11-19 02:01:47.135259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.696 [2024-11-19 02:01:47.135294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.696 [2024-11-19 02:01:47.135323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.696 [2024-11-19 02:01:47.137870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.696 [2024-11-19 02:01:47.137918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.696 [2024-11-19 02:01:47.137988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.696 [2024-11-19 02:01:47.140815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.696 [2024-11-19 02:01:47.140849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.696 [2024-11-19 02:01:47.140878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.696 [2024-11-19 02:01:47.143812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.696 [2024-11-19 02:01:47.143847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.696 [2024-11-19 02:01:47.143877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.696 [2024-11-19 02:01:47.146225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.696 [2024-11-19 02:01:47.146275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.696 [2024-11-19 02:01:47.146304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.696 [2024-11-19 02:01:47.149089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.696 [2024-11-19 02:01:47.149274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.696 [2024-11-19 02:01:47.149309] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.696 [2024-11-19 02:01:47.152573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.696 [2024-11-19 02:01:47.152632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.696 [2024-11-19 02:01:47.152661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.697 [2024-11-19 02:01:47.155513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.697 [2024-11-19 02:01:47.155546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.697 [2024-11-19 02:01:47.155576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.697 [2024-11-19 02:01:47.158512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.697 [2024-11-19 02:01:47.158575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.697 [2024-11-19 02:01:47.158605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.697 [2024-11-19 02:01:47.161381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.697 [2024-11-19 02:01:47.161416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.697 [2024-11-19 02:01:47.161445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.697 [2024-11-19 02:01:47.164403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.697 [2024-11-19 02:01:47.164635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.697 [2024-11-19 02:01:47.164653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.697 [2024-11-19 02:01:47.168071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.697 [2024-11-19 02:01:47.168255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.697 [2024-11-19 02:01:47.168288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.697 [2024-11-19 02:01:47.170890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.697 [2024-11-19 02:01:47.170924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.697 
[2024-11-19 02:01:47.170954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.697 [2024-11-19 02:01:47.174734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.697 [2024-11-19 02:01:47.174785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.697 [2024-11-19 02:01:47.174814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.697 [2024-11-19 02:01:47.177172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.697 [2024-11-19 02:01:47.177206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.697 [2024-11-19 02:01:47.177235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.697 [2024-11-19 02:01:47.180524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.697 [2024-11-19 02:01:47.180558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.697 [2024-11-19 02:01:47.180587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.697 [2024-11-19 02:01:47.183134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.697 [2024-11-19 02:01:47.183168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.697 [2024-11-19 02:01:47.183197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.697 [2024-11-19 02:01:47.186616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.697 [2024-11-19 02:01:47.186650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.697 [2024-11-19 02:01:47.186679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.697 [2024-11-19 02:01:47.189295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.697 [2024-11-19 02:01:47.189330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.697 [2024-11-19 02:01:47.189359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.697 [2024-11-19 02:01:47.192796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.697 [2024-11-19 02:01:47.192830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1760 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.697 [2024-11-19 02:01:47.192859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.697 [2024-11-19 02:01:47.195645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.697 [2024-11-19 02:01:47.195677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.697 [2024-11-19 02:01:47.195706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.697 [2024-11-19 02:01:47.198395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.697 [2024-11-19 02:01:47.198626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.697 [2024-11-19 02:01:47.198644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.697 [2024-11-19 02:01:47.202151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.697 [2024-11-19 02:01:47.202372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.697 [2024-11-19 02:01:47.202405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.697 [2024-11-19 02:01:47.204976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.697 [2024-11-19 02:01:47.205011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.697 [2024-11-19 02:01:47.205040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.697 [2024-11-19 02:01:47.208391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.697 [2024-11-19 02:01:47.208426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.697 [2024-11-19 02:01:47.208456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.697 [2024-11-19 02:01:47.211040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.697 [2024-11-19 02:01:47.211074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.697 [2024-11-19 02:01:47.211103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.697 [2024-11-19 02:01:47.214840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.697 [2024-11-19 02:01:47.214878] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.697 [2024-11-19 02:01:47.214922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.697 [2024-11-19 02:01:47.217390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.697 [2024-11-19 02:01:47.217424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.697 [2024-11-19 02:01:47.217454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.697 [2024-11-19 02:01:47.220958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.697 [2024-11-19 02:01:47.220993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.697 [2024-11-19 02:01:47.221023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.697 [2024-11-19 02:01:47.223830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.697 [2024-11-19 02:01:47.223864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.697 [2024-11-19 02:01:47.223894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.697 [2024-11-19 02:01:47.226560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.697 [2024-11-19 02:01:47.226798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.697 [2024-11-19 02:01:47.226816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.697 [2024-11-19 02:01:47.230384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.697 [2024-11-19 02:01:47.230576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.697 [2024-11-19 02:01:47.230610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.697 [2024-11-19 02:01:47.232997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.697 [2024-11-19 02:01:47.233027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.697 [2024-11-19 02:01:47.233055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.697 [2024-11-19 02:01:47.236566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.697 [2024-11-19 02:01:47.236601] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.697 [2024-11-19 02:01:47.236630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.698 [2024-11-19 02:01:47.239102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.698 [2024-11-19 02:01:47.239136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.698 [2024-11-19 02:01:47.239165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.698 [2024-11-19 02:01:47.242720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.698 [2024-11-19 02:01:47.242755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.698 [2024-11-19 02:01:47.242784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.698 [2024-11-19 02:01:47.245053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.698 [2024-11-19 02:01:47.245086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.698 [2024-11-19 02:01:47.245115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.698 [2024-11-19 02:01:47.248817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.698 [2024-11-19 02:01:47.248851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.698 [2024-11-19 02:01:47.248880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.698 [2024-11-19 02:01:47.252679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.698 [2024-11-19 02:01:47.252714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.698 [2024-11-19 02:01:47.252743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.698 [2024-11-19 02:01:47.256588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.698 [2024-11-19 02:01:47.256621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.698 [2024-11-19 02:01:47.256650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.698 [2024-11-19 02:01:47.260418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e4ecc0) 00:20:36.698 [2024-11-19 02:01:47.260453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.698 [2024-11-19 02:01:47.260482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.698 [2024-11-19 02:01:47.264379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.698 [2024-11-19 02:01:47.264413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.698 [2024-11-19 02:01:47.264442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.698 [2024-11-19 02:01:47.268537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.698 [2024-11-19 02:01:47.268595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.698 [2024-11-19 02:01:47.268609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.698 [2024-11-19 02:01:47.272813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.698 [2024-11-19 02:01:47.272850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.698 [2024-11-19 02:01:47.272894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.698 [2024-11-19 02:01:47.277146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.698 [2024-11-19 02:01:47.277182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.698 [2024-11-19 02:01:47.277211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.698 [2024-11-19 02:01:47.281480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.698 [2024-11-19 02:01:47.281556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.698 [2024-11-19 02:01:47.281571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.698 [2024-11-19 02:01:47.286133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.698 [2024-11-19 02:01:47.286322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.698 [2024-11-19 02:01:47.286355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.698 [2024-11-19 02:01:47.290962] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.698 [2024-11-19 02:01:47.291013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.698 [2024-11-19 02:01:47.291043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.698 [2024-11-19 02:01:47.295285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.698 [2024-11-19 02:01:47.295319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.698 [2024-11-19 02:01:47.295348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.698 [2024-11-19 02:01:47.299733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.698 [2024-11-19 02:01:47.299770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.698 [2024-11-19 02:01:47.299783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.698 [2024-11-19 02:01:47.304003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.698 [2024-11-19 02:01:47.304037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.698 [2024-11-19 02:01:47.304067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.698 [2024-11-19 02:01:47.308360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.698 [2024-11-19 02:01:47.308394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.698 [2024-11-19 02:01:47.308424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.958 [2024-11-19 02:01:47.313114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.958 [2024-11-19 02:01:47.313301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.958 [2024-11-19 02:01:47.313335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.958 [2024-11-19 02:01:47.317266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.958 [2024-11-19 02:01:47.317296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.958 [2024-11-19 02:01:47.317341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:20:36.958 [2024-11-19 02:01:47.321641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.958 [2024-11-19 02:01:47.321832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.958 [2024-11-19 02:01:47.322108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.958 [2024-11-19 02:01:47.326101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.958 [2024-11-19 02:01:47.326300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.958 [2024-11-19 02:01:47.326464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.958 [2024-11-19 02:01:47.330652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.958 [2024-11-19 02:01:47.330858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.958 [2024-11-19 02:01:47.330983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.958 [2024-11-19 02:01:47.335039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.958 [2024-11-19 02:01:47.335228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.958 [2024-11-19 02:01:47.335423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.958 [2024-11-19 02:01:47.339424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.958 [2024-11-19 02:01:47.339657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.958 [2024-11-19 02:01:47.339800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.958 [2024-11-19 02:01:47.343751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.958 [2024-11-19 02:01:47.343971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.958 [2024-11-19 02:01:47.344102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.958 [2024-11-19 02:01:47.348092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.958 [2024-11-19 02:01:47.348312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.958 [2024-11-19 02:01:47.348576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.958 [2024-11-19 02:01:47.352570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.958 [2024-11-19 02:01:47.352777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.958 [2024-11-19 02:01:47.352921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.958 [2024-11-19 02:01:47.356771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.958 [2024-11-19 02:01:47.356989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.958 [2024-11-19 02:01:47.357124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.958 [2024-11-19 02:01:47.360997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.958 [2024-11-19 02:01:47.361204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.958 [2024-11-19 02:01:47.361328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.958 [2024-11-19 02:01:47.365285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.959 [2024-11-19 02:01:47.365499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-11-19 02:01:47.365553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.959 [2024-11-19 02:01:47.369436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.959 [2024-11-19 02:01:47.369644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-11-19 02:01:47.369785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.959 [2024-11-19 02:01:47.373786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.959 [2024-11-19 02:01:47.373989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-11-19 02:01:47.374188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.959 [2024-11-19 02:01:47.378274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.959 [2024-11-19 02:01:47.378472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-11-19 02:01:47.378833] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.959 [2024-11-19 02:01:47.382896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.959 [2024-11-19 02:01:47.383114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-11-19 02:01:47.383250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.959 [2024-11-19 02:01:47.387129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.959 [2024-11-19 02:01:47.387321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-11-19 02:01:47.387456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.959 [2024-11-19 02:01:47.391472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.959 [2024-11-19 02:01:47.391729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-11-19 02:01:47.391980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.959 [2024-11-19 02:01:47.395915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.959 [2024-11-19 02:01:47.396133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-11-19 02:01:47.396270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.959 [2024-11-19 02:01:47.400161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.959 [2024-11-19 02:01:47.400198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-11-19 02:01:47.400228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.959 [2024-11-19 02:01:47.404197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.959 [2024-11-19 02:01:47.404233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-11-19 02:01:47.404263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.959 [2024-11-19 02:01:47.408139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.959 [2024-11-19 02:01:47.408174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:36.959 [2024-11-19 02:01:47.408204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.959 [2024-11-19 02:01:47.412105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.959 [2024-11-19 02:01:47.412141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-11-19 02:01:47.412170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.959 [2024-11-19 02:01:47.416036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.959 [2024-11-19 02:01:47.416072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-11-19 02:01:47.416100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.959 [2024-11-19 02:01:47.419904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.959 [2024-11-19 02:01:47.419938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-11-19 02:01:47.419967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.959 [2024-11-19 02:01:47.423867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.959 [2024-11-19 02:01:47.423902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-11-19 02:01:47.423931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.959 [2024-11-19 02:01:47.427778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.959 [2024-11-19 02:01:47.427813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-11-19 02:01:47.427842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.959 [2024-11-19 02:01:47.431668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.959 [2024-11-19 02:01:47.431703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-11-19 02:01:47.431732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.959 [2024-11-19 02:01:47.435569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.959 [2024-11-19 02:01:47.435604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8000 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-11-19 02:01:47.435632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.959 [2024-11-19 02:01:47.439472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.959 [2024-11-19 02:01:47.439696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-11-19 02:01:47.439729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.959 [2024-11-19 02:01:47.443572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.959 [2024-11-19 02:01:47.443606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-11-19 02:01:47.443635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.959 [2024-11-19 02:01:47.447441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.959 [2024-11-19 02:01:47.447650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-11-19 02:01:47.447668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.959 [2024-11-19 02:01:47.451859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.959 [2024-11-19 02:01:47.451894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-11-19 02:01:47.451923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.959 [2024-11-19 02:01:47.456034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.959 [2024-11-19 02:01:47.456072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-11-19 02:01:47.456085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.959 [2024-11-19 02:01:47.460082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.959 [2024-11-19 02:01:47.460119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-11-19 02:01:47.460148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.959 [2024-11-19 02:01:47.464106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.959 [2024-11-19 02:01:47.464141] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-11-19 02:01:47.464169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.959 [2024-11-19 02:01:47.468067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.959 [2024-11-19 02:01:47.468101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-11-19 02:01:47.468131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.959 [2024-11-19 02:01:47.471967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.959 [2024-11-19 02:01:47.472001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-11-19 02:01:47.472030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.959 [2024-11-19 02:01:47.475882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.959 [2024-11-19 02:01:47.475916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-11-19 02:01:47.475945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.959 [2024-11-19 02:01:47.479808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.959 [2024-11-19 02:01:47.479843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-11-19 02:01:47.479872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.959 [2024-11-19 02:01:47.483765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.959 [2024-11-19 02:01:47.483800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.959 [2024-11-19 02:01:47.483828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.959 [2024-11-19 02:01:47.487724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.959 [2024-11-19 02:01:47.487760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.960 [2024-11-19 02:01:47.487789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.960 [2024-11-19 02:01:47.491730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 
00:20:36.960 [2024-11-19 02:01:47.491764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.960 [2024-11-19 02:01:47.491793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.960 [2024-11-19 02:01:47.495673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.960 [2024-11-19 02:01:47.495708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.960 [2024-11-19 02:01:47.495737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.960 [2024-11-19 02:01:47.499591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.960 [2024-11-19 02:01:47.499625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.960 [2024-11-19 02:01:47.499654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.960 [2024-11-19 02:01:47.503562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.960 [2024-11-19 02:01:47.503596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.960 [2024-11-19 02:01:47.503624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.960 [2024-11-19 02:01:47.507458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.960 [2024-11-19 02:01:47.507690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.960 [2024-11-19 02:01:47.507725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.960 [2024-11-19 02:01:47.511736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.960 [2024-11-19 02:01:47.511772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.960 [2024-11-19 02:01:47.511802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.960 [2024-11-19 02:01:47.515713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.960 [2024-11-19 02:01:47.515747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.960 [2024-11-19 02:01:47.515777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.960 [2024-11-19 02:01:47.519665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.960 [2024-11-19 02:01:47.519699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.960 [2024-11-19 02:01:47.519728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.960 [2024-11-19 02:01:47.523636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.960 [2024-11-19 02:01:47.523670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.960 [2024-11-19 02:01:47.523699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.960 [2024-11-19 02:01:47.527555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.960 [2024-11-19 02:01:47.527588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.960 [2024-11-19 02:01:47.527617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.960 [2024-11-19 02:01:47.531534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.960 [2024-11-19 02:01:47.531568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.960 [2024-11-19 02:01:47.531597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.960 [2024-11-19 02:01:47.535506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.960 [2024-11-19 02:01:47.535724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.960 [2024-11-19 02:01:47.535759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.960 [2024-11-19 02:01:47.539648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.960 [2024-11-19 02:01:47.539683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.960 [2024-11-19 02:01:47.539713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.960 [2024-11-19 02:01:47.543579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.960 [2024-11-19 02:01:47.543613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.960 [2024-11-19 02:01:47.543642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.960 [2024-11-19 02:01:47.547377] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.960 [2024-11-19 02:01:47.547608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.960 [2024-11-19 02:01:47.547625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.960 [2024-11-19 02:01:47.551491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.960 [2024-11-19 02:01:47.551536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.960 [2024-11-19 02:01:47.551565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.960 [2024-11-19 02:01:47.555463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.960 [2024-11-19 02:01:47.555675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.960 [2024-11-19 02:01:47.555692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:36.960 [2024-11-19 02:01:47.559538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.960 [2024-11-19 02:01:47.559572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.960 [2024-11-19 02:01:47.559601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:36.960 [2024-11-19 02:01:47.563476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.960 [2024-11-19 02:01:47.563686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.960 [2024-11-19 02:01:47.563704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:36.960 [2024-11-19 02:01:47.567445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.960 [2024-11-19 02:01:47.567662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.960 [2024-11-19 02:01:47.567700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:36.960 [2024-11-19 02:01:47.571778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:36.960 [2024-11-19 02:01:47.571815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.960 [2024-11-19 02:01:47.571845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:20:37.230 [2024-11-19 02:01:47.576189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.230 [2024-11-19 02:01:47.576225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.230 [2024-11-19 02:01:47.576255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:37.230 [2024-11-19 02:01:47.580369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.230 [2024-11-19 02:01:47.580417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.230 [2024-11-19 02:01:47.580447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:37.230 [2024-11-19 02:01:47.584388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.230 [2024-11-19 02:01:47.584423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.230 [2024-11-19 02:01:47.584452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:37.230 [2024-11-19 02:01:47.588345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.230 [2024-11-19 02:01:47.588379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.230 [2024-11-19 02:01:47.588409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:37.230 [2024-11-19 02:01:47.592312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.230 [2024-11-19 02:01:47.592347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.230 [2024-11-19 02:01:47.592376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:37.230 [2024-11-19 02:01:47.596316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.230 [2024-11-19 02:01:47.596351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.230 [2024-11-19 02:01:47.596380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:37.230 [2024-11-19 02:01:47.600164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.230 [2024-11-19 02:01:47.600199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.230 [2024-11-19 02:01:47.600228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:37.230 [2024-11-19 02:01:47.604154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.230 [2024-11-19 02:01:47.604188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.230 [2024-11-19 02:01:47.604217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:37.230 [2024-11-19 02:01:47.608143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.230 [2024-11-19 02:01:47.608177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.230 [2024-11-19 02:01:47.608207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:37.230 [2024-11-19 02:01:47.612106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.230 [2024-11-19 02:01:47.612141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.230 [2024-11-19 02:01:47.612170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:37.230 [2024-11-19 02:01:47.616010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.230 [2024-11-19 02:01:47.616045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.230 [2024-11-19 02:01:47.616075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:37.230 [2024-11-19 02:01:47.619949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.230 [2024-11-19 02:01:47.619984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.230 [2024-11-19 02:01:47.620013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:37.230 [2024-11-19 02:01:47.623900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.230 [2024-11-19 02:01:47.623935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.230 [2024-11-19 02:01:47.623964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:37.230 [2024-11-19 02:01:47.627745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.230 [2024-11-19 02:01:47.627780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.230 [2024-11-19 02:01:47.627809] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:37.230 [2024-11-19 02:01:47.631702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.230 [2024-11-19 02:01:47.631737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.230 [2024-11-19 02:01:47.631766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:37.230 [2024-11-19 02:01:47.635693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.230 [2024-11-19 02:01:47.635727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.231 [2024-11-19 02:01:47.635756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:37.231 [2024-11-19 02:01:47.639656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.231 [2024-11-19 02:01:47.639690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.231 [2024-11-19 02:01:47.639719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:37.231 [2024-11-19 02:01:47.643609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.231 [2024-11-19 02:01:47.643643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.231 [2024-11-19 02:01:47.643687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:37.231 [2024-11-19 02:01:47.647555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.231 [2024-11-19 02:01:47.647592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.231 [2024-11-19 02:01:47.647621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:37.231 [2024-11-19 02:01:47.651403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.231 [2024-11-19 02:01:47.651619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.231 [2024-11-19 02:01:47.651652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:37.231 [2024-11-19 02:01:47.655576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.231 [2024-11-19 02:01:47.655610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.231 [2024-11-19 
02:01:47.655639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:37.231 [2024-11-19 02:01:47.659471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.231 [2024-11-19 02:01:47.659666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.231 [2024-11-19 02:01:47.659699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:37.231 [2024-11-19 02:01:47.663593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.231 [2024-11-19 02:01:47.663627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.231 [2024-11-19 02:01:47.663656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:37.231 [2024-11-19 02:01:47.667546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.231 [2024-11-19 02:01:47.667579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.231 [2024-11-19 02:01:47.667609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:37.231 [2024-11-19 02:01:47.671495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.231 [2024-11-19 02:01:47.671539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.231 [2024-11-19 02:01:47.671569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:37.231 [2024-11-19 02:01:47.675371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.231 [2024-11-19 02:01:47.675562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.231 [2024-11-19 02:01:47.675597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:37.231 [2024-11-19 02:01:47.679478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.231 [2024-11-19 02:01:47.679685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.231 [2024-11-19 02:01:47.679703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:37.231 [2024-11-19 02:01:47.683634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.231 [2024-11-19 02:01:47.683668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21120 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:37.231 [2024-11-19 02:01:47.683697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:37.231 [2024-11-19 02:01:47.687550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.231 [2024-11-19 02:01:47.687584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.231 [2024-11-19 02:01:47.687614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:37.231 [2024-11-19 02:01:47.691407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.231 [2024-11-19 02:01:47.691635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.231 [2024-11-19 02:01:47.691653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:37.231 [2024-11-19 02:01:47.695490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.231 [2024-11-19 02:01:47.695534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.231 [2024-11-19 02:01:47.695564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:37.231 [2024-11-19 02:01:47.699414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.231 [2024-11-19 02:01:47.699623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.231 [2024-11-19 02:01:47.699640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:37.231 [2024-11-19 02:01:47.703582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.231 [2024-11-19 02:01:47.703632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.231 [2024-11-19 02:01:47.703662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:37.231 [2024-11-19 02:01:47.707541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.231 [2024-11-19 02:01:47.707576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.231 [2024-11-19 02:01:47.707605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:37.231 [2024-11-19 02:01:47.711534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.231 [2024-11-19 02:01:47.711568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.231 [2024-11-19 02:01:47.711597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:37.231 [2024-11-19 02:01:47.715470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.231 [2024-11-19 02:01:47.715669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.231 [2024-11-19 02:01:47.715702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:37.231 [2024-11-19 02:01:47.719644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.231 [2024-11-19 02:01:47.719678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.231 [2024-11-19 02:01:47.719708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:37.231 [2024-11-19 02:01:47.723598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.231 [2024-11-19 02:01:47.723632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.231 [2024-11-19 02:01:47.723662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:37.231 [2024-11-19 02:01:47.727492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.231 [2024-11-19 02:01:47.727535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.231 [2024-11-19 02:01:47.727563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:37.231 [2024-11-19 02:01:47.731423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.231 [2024-11-19 02:01:47.731652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.231 [2024-11-19 02:01:47.731670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:37.231 [2024-11-19 02:01:47.735706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.231 [2024-11-19 02:01:47.735742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.231 [2024-11-19 02:01:47.735771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:37.231 [2024-11-19 02:01:47.739609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.231 [2024-11-19 02:01:47.739644] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.231 [2024-11-19 02:01:47.739673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:37.231 [2024-11-19 02:01:47.743474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.232 [2024-11-19 02:01:47.743707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.232 [2024-11-19 02:01:47.743724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:37.232 [2024-11-19 02:01:47.747567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.232 [2024-11-19 02:01:47.747601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.232 [2024-11-19 02:01:47.747630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:37.232 [2024-11-19 02:01:47.751405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.232 [2024-11-19 02:01:47.751600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.232 [2024-11-19 02:01:47.751634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:37.232 [2024-11-19 02:01:47.755566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.232 [2024-11-19 02:01:47.755600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.232 [2024-11-19 02:01:47.755629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:37.232 [2024-11-19 02:01:47.759438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.232 [2024-11-19 02:01:47.759629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.232 [2024-11-19 02:01:47.759663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:37.232 [2024-11-19 02:01:47.763562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.232 [2024-11-19 02:01:47.763596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.232 [2024-11-19 02:01:47.763626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:37.232 [2024-11-19 02:01:47.767431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 
00:20:37.232 [2024-11-19 02:01:47.767625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.232 [2024-11-19 02:01:47.767658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:37.232 [2024-11-19 02:01:47.771411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.232 [2024-11-19 02:01:47.771603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.232 [2024-11-19 02:01:47.771636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:37.232 [2024-11-19 02:01:47.775487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.232 [2024-11-19 02:01:47.775531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.232 [2024-11-19 02:01:47.775561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:37.232 [2024-11-19 02:01:47.779384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.232 [2024-11-19 02:01:47.779597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.232 [2024-11-19 02:01:47.779637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:37.232 [2024-11-19 02:01:47.783435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.232 [2024-11-19 02:01:47.783663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.232 [2024-11-19 02:01:47.783681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:37.232 [2024-11-19 02:01:47.787523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.232 [2024-11-19 02:01:47.787558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.232 [2024-11-19 02:01:47.787587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:37.232 [2024-11-19 02:01:47.791460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.232 [2024-11-19 02:01:47.791696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.232 [2024-11-19 02:01:47.791714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:37.232 [2024-11-19 02:01:47.795617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.232 [2024-11-19 02:01:47.795651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.232 [2024-11-19 02:01:47.795681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:37.232 [2024-11-19 02:01:47.799432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.232 [2024-11-19 02:01:47.799624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.232 [2024-11-19 02:01:47.799658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:37.232 [2024-11-19 02:01:47.803550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.232 [2024-11-19 02:01:47.803584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.232 [2024-11-19 02:01:47.803614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:37.232 [2024-11-19 02:01:47.807516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.232 [2024-11-19 02:01:47.807549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.232 [2024-11-19 02:01:47.807578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:37.232 [2024-11-19 02:01:47.811413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.232 [2024-11-19 02:01:47.811626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.232 [2024-11-19 02:01:47.811660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:37.232 [2024-11-19 02:01:47.815649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.232 [2024-11-19 02:01:47.815683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.232 [2024-11-19 02:01:47.815713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:37.232 [2024-11-19 02:01:47.819529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.232 [2024-11-19 02:01:47.819563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.232 [2024-11-19 02:01:47.819593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:37.232 [2024-11-19 02:01:47.823334] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.232 [2024-11-19 02:01:47.823567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.232 [2024-11-19 02:01:47.823586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:37.232 [2024-11-19 02:01:47.827386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.232 [2024-11-19 02:01:47.827578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.232 [2024-11-19 02:01:47.827611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:37.232 [2024-11-19 02:01:47.831468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.232 [2024-11-19 02:01:47.831646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.232 [2024-11-19 02:01:47.831680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:37.232 [2024-11-19 02:01:47.836382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.232 [2024-11-19 02:01:47.836422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.232 [2024-11-19 02:01:47.836453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:37.514 [2024-11-19 02:01:47.840934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.514 [2024-11-19 02:01:47.840990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.514 [2024-11-19 02:01:47.841006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:37.514 [2024-11-19 02:01:47.845256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.514 [2024-11-19 02:01:47.845295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.514 [2024-11-19 02:01:47.845326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:37.514 [2024-11-19 02:01:47.849547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.514 [2024-11-19 02:01:47.849586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.514 [2024-11-19 02:01:47.849617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:20:37.514 [2024-11-19 02:01:47.853841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.514 [2024-11-19 02:01:47.853879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.514 [2024-11-19 02:01:47.853910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:37.514 [2024-11-19 02:01:47.858255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.514 [2024-11-19 02:01:47.858339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.514 [2024-11-19 02:01:47.858369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:37.514 [2024-11-19 02:01:47.862604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.514 [2024-11-19 02:01:47.862671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.514 [2024-11-19 02:01:47.862700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:37.514 [2024-11-19 02:01:47.866982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.514 [2024-11-19 02:01:47.867034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.514 [2024-11-19 02:01:47.867063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:37.514 [2024-11-19 02:01:47.871165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.514 [2024-11-19 02:01:47.871200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.514 [2024-11-19 02:01:47.871229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:37.514 [2024-11-19 02:01:47.875193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.514 [2024-11-19 02:01:47.875228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.514 [2024-11-19 02:01:47.875257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:37.514 [2024-11-19 02:01:47.879335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.514 [2024-11-19 02:01:47.879371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.514 [2024-11-19 02:01:47.879400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:37.514 [2024-11-19 02:01:47.883366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.514 [2024-11-19 02:01:47.883400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.514 [2024-11-19 02:01:47.883429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:37.514 8023.00 IOPS, 1002.88 MiB/s [2024-11-19T02:01:48.129Z] [2024-11-19 02:01:47.888572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e4ecc0) 00:20:37.514 [2024-11-19 02:01:47.888630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.514 [2024-11-19 02:01:47.888644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:37.514 00:20:37.514 Latency(us) 00:20:37.514 [2024-11-19T02:01:48.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.514 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:37.514 nvme0n1 : 2.00 8018.96 1002.37 0.00 0.00 1991.79 767.07 12213.53 00:20:37.514 [2024-11-19T02:01:48.129Z] =================================================================================================================== 00:20:37.514 [2024-11-19T02:01:48.129Z] Total : 8018.96 1002.37 0.00 0.00 1991.79 767.07 12213.53 00:20:37.514 { 00:20:37.514 "results": [ 00:20:37.514 { 00:20:37.514 "job": "nvme0n1", 00:20:37.514 "core_mask": "0x2", 00:20:37.514 "workload": "randread", 00:20:37.514 "status": "finished", 00:20:37.514 "queue_depth": 16, 00:20:37.514 "io_size": 131072, 00:20:37.514 "runtime": 2.003004, 00:20:37.514 "iops": 8018.955528795749, 00:20:37.514 "mibps": 1002.3694410994686, 00:20:37.514 "io_failed": 0, 00:20:37.514 "io_timeout": 0, 00:20:37.514 "avg_latency_us": 1991.7887857280311, 00:20:37.514 "min_latency_us": 767.069090909091, 00:20:37.514 "max_latency_us": 12213.527272727273 00:20:37.514 } 00:20:37.514 ], 00:20:37.514 "core_count": 1 00:20:37.514 } 00:20:37.514 02:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:37.514 02:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:37.514 02:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:37.514 | .driver_specific 00:20:37.514 | .nvme_error 00:20:37.514 | .status_code 00:20:37.514 | .command_transient_transport_error' 00:20:37.514 02:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:37.785 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 519 > 0 )) 00:20:37.785 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94856 00:20:37.785 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94856 ']' 00:20:37.785 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94856 
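The (( 519 > 0 )) check just traced is the pass condition for this pass: get_transient_errcount reads the per-bdev NVMe error counters over bdevperf's RPC socket and expects the digest-error injection to have produced at least one COMMAND TRANSIENT TRANSPORT ERROR completion. As a minimal sketch, the same query can be reproduced outside the harness with the exact RPC call and jq filter shown above; only the count variable and the final echo are additions here, and the socket path and bdev name are the ones used in this run:

  # Query per-bdev NVMe error counters from the running bdevperf instance.
  # Requires bdev_nvme_set_options --nvme-error-stat (set during bperf setup).
  count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b nvme0n1 |
          jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( count > 0 )) && echo "transient transport errors observed: $count"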
00:20:37.785 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:37.785 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:37.785 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94856 00:20:37.785 killing process with pid 94856 00:20:37.785 Received shutdown signal, test time was about 2.000000 seconds 00:20:37.785 00:20:37.785 Latency(us) 00:20:37.785 [2024-11-19T02:01:48.400Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.785 [2024-11-19T02:01:48.400Z] =================================================================================================================== 00:20:37.785 [2024-11-19T02:01:48.400Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:37.785 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:37.785 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:37.785 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94856' 00:20:37.785 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94856 00:20:37.785 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94856 00:20:37.785 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:20:37.785 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:37.785 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:20:37.785 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:37.785 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:37.785 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94909 00:20:37.785 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94909 /var/tmp/bperf.sock 00:20:37.785 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:20:37.785 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94909 ']' 00:20:37.785 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:37.785 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:37.785 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:37.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:20:37.785 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:37.785 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:38.044 [2024-11-19 02:01:48.450388] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:20:38.044 [2024-11-19 02:01:48.450684] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94909 ] 00:20:38.044 [2024-11-19 02:01:48.592198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.044 [2024-11-19 02:01:48.611991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.044 [2024-11-19 02:01:48.640325] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:38.303 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:38.303 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:38.303 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:38.303 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:38.562 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:38.562 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.562 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:38.562 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.562 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:38.562 02:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:38.820 nvme0n1 00:20:38.820 02:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:38.820 02:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.820 02:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:38.820 02:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.820 02:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:38.820 02:01:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:38.820 Running I/O for 2 seconds... 
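The WRITE-side digest errors that follow come from the randwrite pass whose setup was just traced. A condensed sketch of that setup, in the order the trace shows it, using only commands and arguments that appear in the trace; the rpc shell variable and the sleep stand-in for the harness's waitforlisten are additions, and the socket that rpc_cmd targets for the accel injection is not shown in the trace, so those two calls are kept as comments:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Start bdevperf with its own RPC socket (-z: wait for perform_tests).
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  sleep 1  # stand-in for waitforlisten on /var/tmp/bperf.sock
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # rpc_cmd accel_error_inject_error -o crc32c -t disable      (traced before attach)
  # Attach the NVMe-oF/TCP target with data digest (--ddgst) enabled.
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256   (traced after attach)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests   # 'Running I/O for 2 seconds...'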
00:20:38.820 [2024-11-19 02:01:49.415702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166fe2e8 00:20:38.820 [2024-11-19 02:01:49.417117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.820 [2024-11-19 02:01:49.417330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:38.820 [2024-11-19 02:01:49.430102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166feb58 00:20:38.820 [2024-11-19 02:01:49.431491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:38.820 [2024-11-19 02:01:49.431736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.079 [2024-11-19 02:01:49.451307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166fef90 00:20:39.079 [2024-11-19 02:01:49.453723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.079 [2024-11-19 02:01:49.453960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:39.079 [2024-11-19 02:01:49.465917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166feb58 00:20:39.079 [2024-11-19 02:01:49.468311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.079 [2024-11-19 02:01:49.468540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:39.079 [2024-11-19 02:01:49.479964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166fe2e8 00:20:39.079 [2024-11-19 02:01:49.482323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.079 [2024-11-19 02:01:49.482548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:39.079 [2024-11-19 02:01:49.493868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166fda78 00:20:39.079 [2024-11-19 02:01:49.496195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.079 [2024-11-19 02:01:49.496394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:39.079 [2024-11-19 02:01:49.508095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166fd208 00:20:39.079 [2024-11-19 02:01:49.510552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.079 [2024-11-19 02:01:49.510756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 
00:20:39.079 [2024-11-19 02:01:49.522428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166fc998 00:20:39.079 [2024-11-19 02:01:49.524738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.079 [2024-11-19 02:01:49.524917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:39.079 [2024-11-19 02:01:49.536569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166fc128 00:20:39.079 [2024-11-19 02:01:49.538729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.079 [2024-11-19 02:01:49.538762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:39.079 [2024-11-19 02:01:49.550111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166fb8b8 00:20:39.079 [2024-11-19 02:01:49.552319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.079 [2024-11-19 02:01:49.552347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:39.079 [2024-11-19 02:01:49.563748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166fb048 00:20:39.079 [2024-11-19 02:01:49.565785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.079 [2024-11-19 02:01:49.566005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:39.079 [2024-11-19 02:01:49.577442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166fa7d8 00:20:39.079 [2024-11-19 02:01:49.579590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.079 [2024-11-19 02:01:49.579623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:39.079 [2024-11-19 02:01:49.590957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f9f68 00:20:39.079 [2024-11-19 02:01:49.593045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.079 [2024-11-19 02:01:49.593077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:39.079 [2024-11-19 02:01:49.604452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f96f8 00:20:39.079 [2024-11-19 02:01:49.606696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.079 [2024-11-19 02:01:49.606729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0067 
p:0 m:0 dnr:0 00:20:39.079 [2024-11-19 02:01:49.618370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f8e88 00:20:39.079 [2024-11-19 02:01:49.620399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.079 [2024-11-19 02:01:49.620431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:39.079 [2024-11-19 02:01:49.632090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f8618 00:20:39.079 [2024-11-19 02:01:49.634252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.079 [2024-11-19 02:01:49.634296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:39.079 [2024-11-19 02:01:49.645842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f7da8 00:20:39.079 [2024-11-19 02:01:49.647843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.079 [2024-11-19 02:01:49.647875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:39.079 [2024-11-19 02:01:49.659245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f7538 00:20:39.079 [2024-11-19 02:01:49.661271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.079 [2024-11-19 02:01:49.661303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:39.079 [2024-11-19 02:01:49.672778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f6cc8 00:20:39.079 [2024-11-19 02:01:49.674787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.079 [2024-11-19 02:01:49.674818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:39.079 [2024-11-19 02:01:49.686190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f6458 00:20:39.079 [2024-11-19 02:01:49.688247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.079 [2024-11-19 02:01:49.688273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:39.339 [2024-11-19 02:01:49.700968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f5be8 00:20:39.339 [2024-11-19 02:01:49.703258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.339 [2024-11-19 02:01:49.703290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 
cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:39.339 [2024-11-19 02:01:49.715177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f5378 00:20:39.339 [2024-11-19 02:01:49.717144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.339 [2024-11-19 02:01:49.717175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:39.339 [2024-11-19 02:01:49.728801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f4b08 00:20:39.339 [2024-11-19 02:01:49.730788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.339 [2024-11-19 02:01:49.730820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:39.339 [2024-11-19 02:01:49.742456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f4298 00:20:39.339 [2024-11-19 02:01:49.744373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.339 [2024-11-19 02:01:49.744405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:39.339 [2024-11-19 02:01:49.755934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f3a28 00:20:39.339 [2024-11-19 02:01:49.757759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.339 [2024-11-19 02:01:49.757965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:39.339 [2024-11-19 02:01:49.769776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f31b8 00:20:39.339 [2024-11-19 02:01:49.771767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.339 [2024-11-19 02:01:49.771970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:39.339 [2024-11-19 02:01:49.783860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f2948 00:20:39.339 [2024-11-19 02:01:49.785917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.339 [2024-11-19 02:01:49.785989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:39.339 [2024-11-19 02:01:49.797619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f20d8 00:20:39.339 [2024-11-19 02:01:49.799449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.339 [2024-11-19 02:01:49.799481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:39.339 [2024-11-19 02:01:49.811456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f1868 00:20:39.339 [2024-11-19 02:01:49.813345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.339 [2024-11-19 02:01:49.813378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:39.339 [2024-11-19 02:01:49.825070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f0ff8 00:20:39.339 [2024-11-19 02:01:49.826961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.339 [2024-11-19 02:01:49.826994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:39.339 [2024-11-19 02:01:49.838838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f0788 00:20:39.339 [2024-11-19 02:01:49.840588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.339 [2024-11-19 02:01:49.840621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:39.339 [2024-11-19 02:01:49.852301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166eff18 00:20:39.339 [2024-11-19 02:01:49.854231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.339 [2024-11-19 02:01:49.854264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:39.339 [2024-11-19 02:01:49.865980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166ef6a8 00:20:39.339 [2024-11-19 02:01:49.867744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.339 [2024-11-19 02:01:49.867774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:39.339 [2024-11-19 02:01:49.879872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166eee38 00:20:39.339 [2024-11-19 02:01:49.881553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.339 [2024-11-19 02:01:49.881592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:39.339 [2024-11-19 02:01:49.893393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166ee5c8 00:20:39.339 [2024-11-19 02:01:49.895263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.339 [2024-11-19 02:01:49.895295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:39.339 [2024-11-19 02:01:49.907165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166edd58 00:20:39.339 [2024-11-19 02:01:49.908917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.339 [2024-11-19 02:01:49.908950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:39.339 [2024-11-19 02:01:49.920893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166ed4e8 00:20:39.339 [2024-11-19 02:01:49.922937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.339 [2024-11-19 02:01:49.922969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:39.339 [2024-11-19 02:01:49.934845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166ecc78 00:20:39.339 [2024-11-19 02:01:49.936497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.339 [2024-11-19 02:01:49.936572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:39.339 [2024-11-19 02:01:49.948262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166ec408 00:20:39.339 [2024-11-19 02:01:49.950029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.339 [2024-11-19 02:01:49.950063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:39.599 [2024-11-19 02:01:49.963016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166ebb98 00:20:39.599 [2024-11-19 02:01:49.964658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.599 [2024-11-19 02:01:49.964691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:39.599 [2024-11-19 02:01:49.976597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166eb328 00:20:39.599 [2024-11-19 02:01:49.978336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.599 [2024-11-19 02:01:49.978375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:39.599 [2024-11-19 02:01:49.990630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166eaab8 00:20:39.599 [2024-11-19 02:01:49.992188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.599 [2024-11-19 02:01:49.992218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:39.599 [2024-11-19 02:01:50.005399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166ea248 00:20:39.599 [2024-11-19 02:01:50.007226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.599 [2024-11-19 02:01:50.007261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:39.599 [2024-11-19 02:01:50.020588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e99d8 00:20:39.599 [2024-11-19 02:01:50.022378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.599 [2024-11-19 02:01:50.022430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:39.599 [2024-11-19 02:01:50.035949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e9168 00:20:39.599 [2024-11-19 02:01:50.037515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.599 [2024-11-19 02:01:50.037573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:39.599 [2024-11-19 02:01:50.049585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e88f8 00:20:39.599 [2024-11-19 02:01:50.051183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.599 [2024-11-19 02:01:50.051215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:39.599 [2024-11-19 02:01:50.063346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e8088 00:20:39.599 [2024-11-19 02:01:50.064994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.599 [2024-11-19 02:01:50.065026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:39.599 [2024-11-19 02:01:50.077103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e7818 00:20:39.599 [2024-11-19 02:01:50.079235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.599 [2024-11-19 02:01:50.079267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:39.599 [2024-11-19 02:01:50.091327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e6fa8 00:20:39.599 [2024-11-19 02:01:50.092926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.599 [2024-11-19 02:01:50.092958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:39.599 [2024-11-19 02:01:50.104917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e6738 00:20:39.599 [2024-11-19 02:01:50.106845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.599 [2024-11-19 02:01:50.106879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:39.599 [2024-11-19 02:01:50.119066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e5ec8 00:20:39.599 [2024-11-19 02:01:50.120500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.599 [2024-11-19 02:01:50.120726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:39.599 [2024-11-19 02:01:50.132898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e5658 00:20:39.599 [2024-11-19 02:01:50.134589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.599 [2024-11-19 02:01:50.134793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:39.599 [2024-11-19 02:01:50.147221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e4de8 00:20:39.599 [2024-11-19 02:01:50.148826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.599 [2024-11-19 02:01:50.149000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:39.599 [2024-11-19 02:01:50.161354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e4578 00:20:39.599 [2024-11-19 02:01:50.162933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.599 [2024-11-19 02:01:50.163135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:39.599 [2024-11-19 02:01:50.175533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e3d08 00:20:39.599 [2024-11-19 02:01:50.177062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.599 [2024-11-19 02:01:50.177261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:39.599 [2024-11-19 02:01:50.189592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e3498 00:20:39.599 [2024-11-19 02:01:50.191131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.599 [2024-11-19 02:01:50.191331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:39.599 [2024-11-19 02:01:50.203823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e2c28 00:20:39.599 [2024-11-19 02:01:50.205330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.599 [2024-11-19 02:01:50.205543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:39.859 [2024-11-19 02:01:50.219110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e23b8 00:20:39.859 [2024-11-19 02:01:50.220764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.859 [2024-11-19 02:01:50.220944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:39.859 [2024-11-19 02:01:50.233256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e1b48 00:20:39.859 [2024-11-19 02:01:50.234828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.859 [2024-11-19 02:01:50.235048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:39.859 [2024-11-19 02:01:50.247646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e12d8 00:20:39.859 [2024-11-19 02:01:50.249090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.859 [2024-11-19 02:01:50.249289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:39.859 [2024-11-19 02:01:50.261770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e0a68 00:20:39.859 [2024-11-19 02:01:50.263411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.859 [2024-11-19 02:01:50.263446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:39.859 [2024-11-19 02:01:50.275538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e01f8 00:20:39.859 [2024-11-19 02:01:50.276800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.859 [2024-11-19 02:01:50.276833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:39.859 [2024-11-19 02:01:50.288941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166df988 00:20:39.859 [2024-11-19 02:01:50.290212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.859 [2024-11-19 
02:01:50.290276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:39.859 [2024-11-19 02:01:50.303546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166df118 00:20:39.859 [2024-11-19 02:01:50.304953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.859 [2024-11-19 02:01:50.304986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:39.859 [2024-11-19 02:01:50.319227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166de8a8 00:20:39.859 [2024-11-19 02:01:50.320642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.859 [2024-11-19 02:01:50.320677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:39.859 [2024-11-19 02:01:50.334870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166de038 00:20:39.859 [2024-11-19 02:01:50.336283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.859 [2024-11-19 02:01:50.336326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:39.859 [2024-11-19 02:01:50.358791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166de038 00:20:39.859 [2024-11-19 02:01:50.361334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.859 [2024-11-19 02:01:50.361366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:39.859 [2024-11-19 02:01:50.374991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166de8a8 00:20:39.859 [2024-11-19 02:01:50.377392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.859 [2024-11-19 02:01:50.377425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:39.859 [2024-11-19 02:01:50.390298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166df118 00:20:39.859 [2024-11-19 02:01:50.392636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.859 [2024-11-19 02:01:50.392670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:39.859 17839.00 IOPS, 69.68 MiB/s [2024-11-19T02:01:50.474Z] [2024-11-19 02:01:50.406168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166df988 00:20:39.859 [2024-11-19 02:01:50.408535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 
lba:4120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.859 [2024-11-19 02:01:50.408570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:39.859 [2024-11-19 02:01:50.420486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e01f8 00:20:39.859 [2024-11-19 02:01:50.422929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.859 [2024-11-19 02:01:50.422962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:39.859 [2024-11-19 02:01:50.435405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e0a68 00:20:39.859 [2024-11-19 02:01:50.437653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.859 [2024-11-19 02:01:50.437687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:39.859 [2024-11-19 02:01:50.450001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e12d8 00:20:39.859 [2024-11-19 02:01:50.452231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.859 [2024-11-19 02:01:50.452264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:39.859 [2024-11-19 02:01:50.464490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e1b48 00:20:39.859 [2024-11-19 02:01:50.466770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:39.859 [2024-11-19 02:01:50.466802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:40.119 [2024-11-19 02:01:50.480298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e23b8 00:20:40.119 [2024-11-19 02:01:50.482526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.119 [2024-11-19 02:01:50.482582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:40.119 [2024-11-19 02:01:50.494958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e2c28 00:20:40.119 [2024-11-19 02:01:50.497089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.119 [2024-11-19 02:01:50.497121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:40.119 [2024-11-19 02:01:50.509408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e3498 00:20:40.119 [2024-11-19 02:01:50.511708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:86 nsid:1 lba:5167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.119 [2024-11-19 02:01:50.511740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:40.119 [2024-11-19 02:01:50.523588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e3d08 00:20:40.119 [2024-11-19 02:01:50.525577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.119 [2024-11-19 02:01:50.525609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:40.119 [2024-11-19 02:01:50.537156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e4578 00:20:40.119 [2024-11-19 02:01:50.539228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.119 [2024-11-19 02:01:50.539259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:40.119 [2024-11-19 02:01:50.550936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e4de8 00:20:40.119 [2024-11-19 02:01:50.552945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.119 [2024-11-19 02:01:50.552976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:40.119 [2024-11-19 02:01:50.564391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e5658 00:20:40.119 [2024-11-19 02:01:50.566493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.119 [2024-11-19 02:01:50.566549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:40.119 [2024-11-19 02:01:50.577985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e5ec8 00:20:40.119 [2024-11-19 02:01:50.579990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.119 [2024-11-19 02:01:50.580021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:40.119 [2024-11-19 02:01:50.591603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e6738 00:20:40.119 [2024-11-19 02:01:50.593831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.119 [2024-11-19 02:01:50.594035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:40.119 [2024-11-19 02:01:50.605676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e6fa8 00:20:40.119 [2024-11-19 02:01:50.607727] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.119 [2024-11-19 02:01:50.607754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:40.119 [2024-11-19 02:01:50.619428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e7818 00:20:40.119 [2024-11-19 02:01:50.621369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.119 [2024-11-19 02:01:50.621401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:40.119 [2024-11-19 02:01:50.632989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e8088 00:20:40.119 [2024-11-19 02:01:50.634917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.119 [2024-11-19 02:01:50.634948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:40.119 [2024-11-19 02:01:50.646639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e88f8 00:20:40.119 [2024-11-19 02:01:50.648522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.119 [2024-11-19 02:01:50.648579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:40.119 [2024-11-19 02:01:50.660100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e9168 00:20:40.119 [2024-11-19 02:01:50.662062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.119 [2024-11-19 02:01:50.662094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:40.119 [2024-11-19 02:01:50.673690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166e99d8 00:20:40.119 [2024-11-19 02:01:50.675570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.119 [2024-11-19 02:01:50.675600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:40.119 [2024-11-19 02:01:50.687083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166ea248 00:20:40.119 [2024-11-19 02:01:50.688986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.119 [2024-11-19 02:01:50.689016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:40.119 [2024-11-19 02:01:50.700723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166eaab8 00:20:40.119 [2024-11-19 02:01:50.702616] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.120 [2024-11-19 02:01:50.702647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:40.120 [2024-11-19 02:01:50.714412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166eb328 00:20:40.120 [2024-11-19 02:01:50.716323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.120 [2024-11-19 02:01:50.716349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:40.120 [2024-11-19 02:01:50.728108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166ebb98 00:20:40.120 [2024-11-19 02:01:50.729943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.120 [2024-11-19 02:01:50.730151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:40.379 [2024-11-19 02:01:50.743227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166ec408 00:20:40.379 [2024-11-19 02:01:50.745066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.379 [2024-11-19 02:01:50.745098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:40.379 [2024-11-19 02:01:50.756934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166ecc78 00:20:40.379 [2024-11-19 02:01:50.758735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.379 [2024-11-19 02:01:50.758765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:40.379 [2024-11-19 02:01:50.770597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166ed4e8 00:20:40.379 [2024-11-19 02:01:50.772301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.379 [2024-11-19 02:01:50.772332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:40.379 [2024-11-19 02:01:50.784268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166edd58 00:20:40.379 [2024-11-19 02:01:50.786099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.379 [2024-11-19 02:01:50.786133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:40.379 [2024-11-19 02:01:50.797844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166ee5c8 00:20:40.379 [2024-11-19 02:01:50.799604] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.379 [2024-11-19 02:01:50.799635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:40.379 [2024-11-19 02:01:50.811503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166eee38 00:20:40.379 [2024-11-19 02:01:50.813344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.379 [2024-11-19 02:01:50.813375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:40.379 [2024-11-19 02:01:50.825116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166ef6a8 00:20:40.379 [2024-11-19 02:01:50.826927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.379 [2024-11-19 02:01:50.826959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:40.379 [2024-11-19 02:01:50.839585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166eff18 00:20:40.379 [2024-11-19 02:01:50.841237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.379 [2024-11-19 02:01:50.841268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:40.379 [2024-11-19 02:01:50.853134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f0788 00:20:40.379 [2024-11-19 02:01:50.854952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.379 [2024-11-19 02:01:50.854984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:40.379 [2024-11-19 02:01:50.866864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f0ff8 00:20:40.380 [2024-11-19 02:01:50.868501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.380 [2024-11-19 02:01:50.868559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:40.380 [2024-11-19 02:01:50.880682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f1868 00:20:40.380 [2024-11-19 02:01:50.882659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.380 [2024-11-19 02:01:50.882690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:40.380 [2024-11-19 02:01:50.894577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f20d8 00:20:40.380 [2024-11-19 
02:01:50.896139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.380 [2024-11-19 02:01:50.896169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:40.380 [2024-11-19 02:01:50.908113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f2948 00:20:40.380 [2024-11-19 02:01:50.909696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.380 [2024-11-19 02:01:50.909882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:40.380 [2024-11-19 02:01:50.921945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f31b8 00:20:40.380 [2024-11-19 02:01:50.923842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.380 [2024-11-19 02:01:50.923873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:40.380 [2024-11-19 02:01:50.935783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f3a28 00:20:40.380 [2024-11-19 02:01:50.937344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.380 [2024-11-19 02:01:50.937375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:40.380 [2024-11-19 02:01:50.949377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f4298 00:20:40.380 [2024-11-19 02:01:50.951083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.380 [2024-11-19 02:01:50.951114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:40.380 [2024-11-19 02:01:50.963157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f4b08 00:20:40.380 [2024-11-19 02:01:50.964682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.380 [2024-11-19 02:01:50.964715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:40.380 [2024-11-19 02:01:50.976635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f5378 00:20:40.380 [2024-11-19 02:01:50.978170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.380 [2024-11-19 02:01:50.978359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:40.380 [2024-11-19 02:01:50.990583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f5be8 00:20:40.380 
[2024-11-19 02:01:50.992077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.380 [2024-11-19 02:01:50.992111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.640 [2024-11-19 02:01:51.005136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f6458 00:20:40.640 [2024-11-19 02:01:51.006731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.640 [2024-11-19 02:01:51.006765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:40.640 [2024-11-19 02:01:51.018986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f6cc8 00:20:40.640 [2024-11-19 02:01:51.020429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.640 [2024-11-19 02:01:51.020461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:40.640 [2024-11-19 02:01:51.032659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f7538 00:20:40.640 [2024-11-19 02:01:51.034244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.640 [2024-11-19 02:01:51.034288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:40.640 [2024-11-19 02:01:51.046360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f7da8 00:20:40.640 [2024-11-19 02:01:51.047856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.640 [2024-11-19 02:01:51.047889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:40.640 [2024-11-19 02:01:51.060141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f8618 00:20:40.640 [2024-11-19 02:01:51.061555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.640 [2024-11-19 02:01:51.061613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:40.640 [2024-11-19 02:01:51.073684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f8e88 00:20:40.640 [2024-11-19 02:01:51.075130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.640 [2024-11-19 02:01:51.075161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:40.640 [2024-11-19 02:01:51.087152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f96f8 
00:20:40.640 [2024-11-19 02:01:51.088535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.640 [2024-11-19 02:01:51.088591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:40.640 [2024-11-19 02:01:51.100678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f9f68 00:20:40.640 [2024-11-19 02:01:51.102056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.640 [2024-11-19 02:01:51.102088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:40.640 [2024-11-19 02:01:51.114388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166fa7d8 00:20:40.640 [2024-11-19 02:01:51.115916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.640 [2024-11-19 02:01:51.116050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:40.640 [2024-11-19 02:01:51.128454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166fb048 00:20:40.640 [2024-11-19 02:01:51.130004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.640 [2024-11-19 02:01:51.130182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:40.640 [2024-11-19 02:01:51.142889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166fb8b8 00:20:40.640 [2024-11-19 02:01:51.144330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.640 [2024-11-19 02:01:51.144557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:40.640 [2024-11-19 02:01:51.156930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166fc128 00:20:40.640 [2024-11-19 02:01:51.158452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.640 [2024-11-19 02:01:51.158685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:40.640 [2024-11-19 02:01:51.171117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166fc998 00:20:40.640 [2024-11-19 02:01:51.172562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.640 [2024-11-19 02:01:51.172797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:40.640 [2024-11-19 02:01:51.185235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with 
pdu=0x2000166fd208 00:20:40.640 [2024-11-19 02:01:51.186709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.640 [2024-11-19 02:01:51.186913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:40.640 [2024-11-19 02:01:51.199459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166fda78 00:20:40.640 [2024-11-19 02:01:51.200904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.640 [2024-11-19 02:01:51.201093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:40.640 [2024-11-19 02:01:51.213556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166fe2e8 00:20:40.640 [2024-11-19 02:01:51.214952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.640 [2024-11-19 02:01:51.215150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:40.640 [2024-11-19 02:01:51.227752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166feb58 00:20:40.640 [2024-11-19 02:01:51.229093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.640 [2024-11-19 02:01:51.229294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:40.640 [2024-11-19 02:01:51.247430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166fef90 00:20:40.640 [2024-11-19 02:01:51.249786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.640 [2024-11-19 02:01:51.249972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:40.901 [2024-11-19 02:01:51.262465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166feb58 00:20:40.901 [2024-11-19 02:01:51.264857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.901 [2024-11-19 02:01:51.265041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:40.901 [2024-11-19 02:01:51.276641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166fe2e8 00:20:40.901 [2024-11-19 02:01:51.278889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:40.901 [2024-11-19 02:01:51.278922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:40.901 [2024-11-19 02:01:51.290122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xfcd2a0) with pdu=0x2000166fda78
00:20:40.901 [2024-11-19 02:01:51.292580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:40.901 [2024-11-19 02:01:51.292637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:20:40.901 [2024-11-19 02:01:51.303963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166fd208
00:20:40.901 [2024-11-19 02:01:51.306439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:40.901 [2024-11-19 02:01:51.306470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:20:40.901 [2024-11-19 02:01:51.317773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166fc998
00:20:40.901 [2024-11-19 02:01:51.319950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:40.901 [2024-11-19 02:01:51.319981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:20:40.901 [2024-11-19 02:01:51.331405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166fc128
00:20:40.901 [2024-11-19 02:01:51.333519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:40.901 [2024-11-19 02:01:51.333549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:20:40.901 [2024-11-19 02:01:51.345247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166fb8b8
00:20:40.901 [2024-11-19 02:01:51.347525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:40.901 [2024-11-19 02:01:51.347584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:20:40.901 [2024-11-19 02:01:51.359023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166fb048
00:20:40.901 [2024-11-19 02:01:51.361339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:40.901 [2024-11-19 02:01:51.361372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:20:40.901 [2024-11-19 02:01:51.374729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166fa7d8
00:20:40.901 [2024-11-19 02:01:51.377232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:40.901 [2024-11-19 02:01:51.377268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:20:40.901 [2024-11-19 02:01:51.390774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f9f68
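Every failure above follows the same two-record pattern: the TCP transport's data_crc32_calc_done() reports a CRC-32C data digest (DDGST) mismatch on the queue pair, and the matching WRITE is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. status code type 0x0, status code 0x22. Because failed I/O is retried by the bdev layer (the retry count is set to -1 in the trace further down), the workload keeps running and the summary below still reports zero failed I/O. A minimal post-processing sketch for tallying these records from a saved copy of this log (a hypothetical helper, not part of digest.sh):

count_digest_errors() {
    # $1: path to a captured bdevperf/autotest log like this one
    local log=$1
    # One line per DDGST mismatch flagged by the TCP transport.
    grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' "$log"
    # Transient-transport-error completions, broken down per queue ID.
    grep -o 'TRANSIENT TRANSPORT ERROR (00/22) qid:[0-9]*' "$log" | sort | uniq -c
}

The per-qid breakdown is only a sanity check that the failures all land on I/O queue 1, as in the records above.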
00:20:40.901 [2024-11-19 02:01:51.392972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:40.901 [2024-11-19 02:01:51.393002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:20:40.901 17964.50 IOPS, 70.17 MiB/s [2024-11-19T02:01:51.516Z]
[2024-11-19 02:01:51.406893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd2a0) with pdu=0x2000166f96f8
00:20:40.901 [2024-11-19 02:01:51.408874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:40.901 [2024-11-19 02:01:51.408905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:20:40.901
00:20:40.901 Latency(us)
00:20:40.901 [2024-11-19T02:01:51.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:40.901 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:20:40.901 nvme0n1 : 2.01 17952.27 70.13 0.00 0.00 7124.03 4379.00 27644.28
00:20:40.901 [2024-11-19T02:01:51.516Z] ===================================================================================================================
00:20:40.901 [2024-11-19T02:01:51.516Z] Total : 17952.27 70.13 0.00 0.00 7124.03 4379.00 27644.28
00:20:40.901 {
00:20:40.901   "results": [
00:20:40.901     {
00:20:40.901       "job": "nvme0n1",
00:20:40.901       "core_mask": "0x2",
00:20:40.901       "workload": "randwrite",
00:20:40.901       "status": "finished",
00:20:40.901       "queue_depth": 128,
00:20:40.901       "io_size": 4096,
00:20:40.901       "runtime": 2.008492,
00:20:40.901       "iops": 17952.274641870616,
00:20:40.901       "mibps": 70.12607281980709,
00:20:40.901       "io_failed": 0,
00:20:40.901       "io_timeout": 0,
00:20:40.901       "avg_latency_us": 7124.026763684772,
00:20:40.901       "min_latency_us": 4378.996363636364,
00:20:40.901       "max_latency_us": 27644.276363636363
00:20:40.901     }
00:20:40.901   ],
00:20:40.901   "core_count": 1
00:20:40.901 }
00:20:40.901 02:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:40.901 02:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:40.901 02:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:40.901 | .driver_specific
00:20:40.901 | .nvme_error
00:20:40.901 | .status_code
00:20:40.901 | .command_transient_transport_error'
00:20:40.901 02:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:41.161 02:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 141 > 0 ))
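This check is the point of the run: get_transient_errcount read 141 transient transport errors out of nvme0n1's I/O statistics, and the test passes only if that counter is greater than zero. Per the trace above, the helper boils down to the following (a sketch assuming the same rpc.py path and bperf socket; digest.sh itself goes through its bperf_rpc wrapper):

get_transient_errcount() {
    local bdev=$1
    # bdev_nvme_set_options --nvme-error-stat makes the NVMe bdev driver
    # keep per-status-code completion counters, which bdev_get_iostat
    # exposes under driver_specific.nvme_error.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

With the count asserted, the script tears down this bdevperf instance before starting the next block-size variant.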
00:20:41.161 02:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94909
00:20:41.161 02:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94909 ']'
00:20:41.161 02:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94909
00:20:41.161 02:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:20:41.161 02:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:41.161 02:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94909
00:20:41.161 killing process with pid 94909
Received shutdown signal, test time was about 2.000000 seconds
00:20:41.161
00:20:41.161 Latency(us)
00:20:41.161 [2024-11-19T02:01:51.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:41.161 [2024-11-19T02:01:51.776Z] ===================================================================================================================
00:20:41.161 [2024-11-19T02:01:51.776Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:41.161 02:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:20:41.161 02:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:20:41.161 02:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94909'
00:20:41.161 02:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94909
00:20:41.161 02:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94909
00:20:41.420 02:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:20:41.420 02:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:20:41.420 02:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:20:41.420 02:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:20:41.420 02:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:20:41.420 02:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94956
00:20:41.420 02:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94956 /var/tmp/bperf.sock
00:20:41.420 02:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:20:41.420 02:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94956 ']'
00:20:41.420 02:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:20:41.420 02:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:41.420 02:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:20:41.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:20:41.420 02:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:41.420 02:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:41.420 [2024-11-19 02:01:51.910314] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization...
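From here the script repeats the same recipe at a larger I/O size: run_bperf_err randwrite 131072 16 launches a fresh bdevperf (pid 94956) and, as the trace that follows shows, configures it over /var/tmp/bperf.sock before arming the CRC-32C corruption. Condensed into plain commands (arguments exactly as traced; this is a readability restatement, not the script itself):

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

# Keep per-status-code NVMe error counters and retry failed I/O indefinitely.
rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Leave crc32c untouched while the controller attaches.
rpc accel_error_inject_error -o crc32c -t disable
# Attach over TCP with the data digest (DDGST) enabled on the I/O qpairs.
rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Now start corrupting the crc32c results the accel layer produces.
rpc accel_error_inject_error -o crc32c -t corrupt -i 32
# Kick off the timed workload in the already-running bdevperf.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests

Attaching with --ddgst is what arms the test: every data-bearing PDU now carries a digest, so each corrupted CRC-32C surfaces as one of the transient transport errors counted above.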
00:20:41.420 [2024-11-19 02:01:51.910622] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94956 ] 00:20:41.420 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:41.420 Zero copy mechanism will not be used. 00:20:41.679 [2024-11-19 02:01:52.056347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.679 [2024-11-19 02:01:52.075636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.679 [2024-11-19 02:01:52.104076] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:41.679 02:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:41.679 02:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:41.679 02:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:41.679 02:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:41.938 02:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:41.938 02:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.938 02:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:41.939 02:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.939 02:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:41.939 02:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:42.198 nvme0n1 00:20:42.198 02:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:20:42.198 02:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.198 02:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:42.198 02:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.198 02:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:42.198 02:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:42.458 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:42.458 Zero copy mechanism will not be used. 00:20:42.458 Running I/O for 2 seconds... 
00:20:42.458 [2024-11-19 02:01:52.828640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.458 [2024-11-19 02:01:52.828903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.458 [2024-11-19 02:01:52.828933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:42.458 [2024-11-19 02:01:52.834124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.458 [2024-11-19 02:01:52.834228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.458 [2024-11-19 02:01:52.834254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:42.458 [2024-11-19 02:01:52.838938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.458 [2024-11-19 02:01:52.839067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.458 [2024-11-19 02:01:52.839088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:42.458 [2024-11-19 02:01:52.843483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.458 [2024-11-19 02:01:52.843613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.458 [2024-11-19 02:01:52.843635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:42.458 [2024-11-19 02:01:52.848334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.459 [2024-11-19 02:01:52.848587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.459 [2024-11-19 02:01:52.848622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:42.459 [2024-11-19 02:01:52.853162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.459 [2024-11-19 02:01:52.853262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.459 [2024-11-19 02:01:52.853283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:42.459 [2024-11-19 02:01:52.857687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.459 [2024-11-19 02:01:52.857786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.459 [2024-11-19 02:01:52.857807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:20:42.459 [2024-11-19 02:01:52.862530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.459 [2024-11-19 02:01:52.862646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.459 [2024-11-19 02:01:52.862683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:42.459 [2024-11-19 02:01:52.867155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.459 [2024-11-19 02:01:52.867284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.459 [2024-11-19 02:01:52.867304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:42.459 [2024-11-19 02:01:52.871866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.459 [2024-11-19 02:01:52.871964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.459 [2024-11-19 02:01:52.871985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:42.459 [2024-11-19 02:01:52.876390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.459 [2024-11-19 02:01:52.876650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.459 [2024-11-19 02:01:52.876672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:42.459 [2024-11-19 02:01:52.881460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.459 [2024-11-19 02:01:52.881594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.459 [2024-11-19 02:01:52.881616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:42.459 [2024-11-19 02:01:52.885988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.459 [2024-11-19 02:01:52.886120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.459 [2024-11-19 02:01:52.886143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:42.459 [2024-11-19 02:01:52.890765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.459 [2024-11-19 02:01:52.890865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.459 [2024-11-19 02:01:52.890901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:42.459 [2024-11-19 02:01:52.895608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.459 [2024-11-19 02:01:52.895710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.459 [2024-11-19 02:01:52.895731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:42.459 [2024-11-19 02:01:52.900238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.459 [2024-11-19 02:01:52.900340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.459 [2024-11-19 02:01:52.900361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:42.459 [2024-11-19 02:01:52.904925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.459 [2024-11-19 02:01:52.905022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.459 [2024-11-19 02:01:52.905043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:42.459 [2024-11-19 02:01:52.909753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.459 [2024-11-19 02:01:52.909854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.459 [2024-11-19 02:01:52.909875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:42.459 [2024-11-19 02:01:52.914487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.459 [2024-11-19 02:01:52.914619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.459 [2024-11-19 02:01:52.914657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:42.459 [2024-11-19 02:01:52.919328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.459 [2024-11-19 02:01:52.919606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.459 [2024-11-19 02:01:52.919643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:42.459 [2024-11-19 02:01:52.924471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.459 [2024-11-19 02:01:52.924649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.459 [2024-11-19 02:01:52.924671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:42.459 [2024-11-19 02:01:52.929235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.459 [2024-11-19 02:01:52.929334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.459 [2024-11-19 02:01:52.929355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:42.459 [2024-11-19 02:01:52.933906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.459 [2024-11-19 02:01:52.934035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.459 [2024-11-19 02:01:52.934058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:42.459 [2024-11-19 02:01:52.938635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.459 [2024-11-19 02:01:52.938776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.459 [2024-11-19 02:01:52.938798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:42.459 [2024-11-19 02:01:52.943190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.459 [2024-11-19 02:01:52.943321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.459 [2024-11-19 02:01:52.943342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:42.459 [2024-11-19 02:01:52.947632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.459 [2024-11-19 02:01:52.947757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.459 [2024-11-19 02:01:52.947778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:42.459 [2024-11-19 02:01:52.952025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.459 [2024-11-19 02:01:52.952150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.459 [2024-11-19 02:01:52.952170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:42.459 [2024-11-19 02:01:52.956404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.459 [2024-11-19 02:01:52.956557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.459 [2024-11-19 02:01:52.956577] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:42.459 [2024-11-19 02:01:52.960856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.459 [2024-11-19 02:01:52.960965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.459 [2024-11-19 02:01:52.960986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:42.459 [2024-11-19 02:01:52.965155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.459 [2024-11-19 02:01:52.965281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.459 [2024-11-19 02:01:52.965300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:42.459 [2024-11-19 02:01:52.969675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.459 [2024-11-19 02:01:52.969773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.459 [2024-11-19 02:01:52.969793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:42.460 [2024-11-19 02:01:52.973993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.460 [2024-11-19 02:01:52.974260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.460 [2024-11-19 02:01:52.974297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:42.460 [2024-11-19 02:01:52.978794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.460 [2024-11-19 02:01:52.978895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.460 [2024-11-19 02:01:52.978931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:42.460 [2024-11-19 02:01:52.983158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.460 [2024-11-19 02:01:52.983280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.460 [2024-11-19 02:01:52.983300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:42.460 [2024-11-19 02:01:52.987598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:42.460 [2024-11-19 02:01:52.987695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:42.460 [2024-11-19 02:01:52.987716] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:42.460 [2024-11-19 02:01:52.991984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8
00:20:42.460 [2024-11-19 02:01:52.992109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:42.460 [2024-11-19 02:01:52.992129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... ~150 further repetitions of the same three-line pattern omitted: tcp.c:2233:data_crc32_calc_done *ERROR* Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8, the affected WRITE (qid:1, len:32, cid 0/1, varying lba), and its completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22), sqhd cycling 0002/0022/0042/0062, spanning 02:01:52.996 through 02:01:53.661 ...]
00:20:43.247 [2024-11-19 02:01:53.665832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8
00:20:43.247 [2024-11-19 02:01:53.666215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:43.247 [2024-11-19 02:01:53.666267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:43.247 [2024-11-19 02:01:53.670720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.247 [2024-11-19 02:01:53.671053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.247 [2024-11-19 02:01:53.671089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:43.247 [2024-11-19 02:01:53.675599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.247 [2024-11-19 02:01:53.675927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.247 [2024-11-19 02:01:53.675963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:43.247 [2024-11-19 02:01:53.680319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.247 [2024-11-19 02:01:53.680675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.247 [2024-11-19 02:01:53.680709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:43.247 [2024-11-19 02:01:53.684958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.247 [2024-11-19 02:01:53.685291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.247 [2024-11-19 02:01:53.685324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:43.247 [2024-11-19 02:01:53.689653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.247 [2024-11-19 02:01:53.690005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.247 [2024-11-19 02:01:53.690038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:43.247 [2024-11-19 02:01:53.694317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.247 [2024-11-19 02:01:53.694683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.247 [2024-11-19 02:01:53.694730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:43.247 [2024-11-19 02:01:53.698959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.247 [2024-11-19 02:01:53.699296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:43.247 [2024-11-19 02:01:53.699330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:43.247 [2024-11-19 02:01:53.703511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.247 [2024-11-19 02:01:53.703837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.247 [2024-11-19 02:01:53.703881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:43.247 [2024-11-19 02:01:53.708207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.247 [2024-11-19 02:01:53.708532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.247 [2024-11-19 02:01:53.708572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:43.247 [2024-11-19 02:01:53.712931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.247 [2024-11-19 02:01:53.713256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.247 [2024-11-19 02:01:53.713290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:43.247 [2024-11-19 02:01:53.717561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.247 [2024-11-19 02:01:53.717891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.247 [2024-11-19 02:01:53.717928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:43.247 [2024-11-19 02:01:53.722070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.247 [2024-11-19 02:01:53.722415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.247 [2024-11-19 02:01:53.722452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:43.247 [2024-11-19 02:01:53.726768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.247 [2024-11-19 02:01:53.727103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.247 [2024-11-19 02:01:53.727135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:43.247 [2024-11-19 02:01:53.731242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.247 [2024-11-19 02:01:53.731588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.247 [2024-11-19 02:01:53.731619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:43.247 [2024-11-19 02:01:53.735824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.248 [2024-11-19 02:01:53.736157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.248 [2024-11-19 02:01:53.736199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:43.248 [2024-11-19 02:01:53.740440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.248 [2024-11-19 02:01:53.740797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.248 [2024-11-19 02:01:53.740833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:43.248 [2024-11-19 02:01:53.744911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.248 [2024-11-19 02:01:53.745245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.248 [2024-11-19 02:01:53.745281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:43.248 [2024-11-19 02:01:53.749493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.248 [2024-11-19 02:01:53.749840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.248 [2024-11-19 02:01:53.749873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:43.248 [2024-11-19 02:01:53.754253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.248 [2024-11-19 02:01:53.754630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.248 [2024-11-19 02:01:53.754693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:43.248 [2024-11-19 02:01:53.758990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.248 [2024-11-19 02:01:53.759331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.248 [2024-11-19 02:01:53.759363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:43.248 [2024-11-19 02:01:53.763628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.248 [2024-11-19 02:01:53.763953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.248 [2024-11-19 02:01:53.763985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:43.248 [2024-11-19 02:01:53.768340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.248 [2024-11-19 02:01:53.768677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.248 [2024-11-19 02:01:53.768711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:43.248 [2024-11-19 02:01:53.772912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.248 [2024-11-19 02:01:53.773247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.248 [2024-11-19 02:01:53.773290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:43.248 [2024-11-19 02:01:53.777291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.248 [2024-11-19 02:01:53.777655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.248 [2024-11-19 02:01:53.777687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:43.248 [2024-11-19 02:01:53.781931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.248 [2024-11-19 02:01:53.782287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.248 [2024-11-19 02:01:53.782325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:43.248 [2024-11-19 02:01:53.786567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.248 [2024-11-19 02:01:53.786905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.248 [2024-11-19 02:01:53.786939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:43.248 [2024-11-19 02:01:53.791137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.248 [2024-11-19 02:01:53.791462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.248 [2024-11-19 02:01:53.791510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:43.248 [2024-11-19 02:01:53.795841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.248 [2024-11-19 02:01:53.796167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.248 [2024-11-19 02:01:53.796200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:43.248 [2024-11-19 02:01:53.800410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.248 [2024-11-19 02:01:53.800746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.248 [2024-11-19 02:01:53.800792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:43.248 [2024-11-19 02:01:53.805025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.248 [2024-11-19 02:01:53.805353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.248 [2024-11-19 02:01:53.805387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:43.248 [2024-11-19 02:01:53.809595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.248 [2024-11-19 02:01:53.809918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.248 [2024-11-19 02:01:53.809975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:43.248 [2024-11-19 02:01:53.814277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.248 [2024-11-19 02:01:53.814589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.248 [2024-11-19 02:01:53.814666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:43.248 [2024-11-19 02:01:53.818982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.248 [2024-11-19 02:01:53.819316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.248 [2024-11-19 02:01:53.819349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:43.248 6622.00 IOPS, 827.75 MiB/s [2024-11-19T02:01:53.863Z] [2024-11-19 02:01:53.824659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.248 [2024-11-19 02:01:53.824984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.248 [2024-11-19 02:01:53.825021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:43.248 [2024-11-19 02:01:53.829052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.248 
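Every failure in this run is one mechanism seen from the host side: the NVMe/TCP transport carries a CRC32C digest with each data-bearing PDU, the receiver recomputes it (here in tcp.c's data_crc32_calc_done), and a mismatch completes the command with the generic status TRANSIENT TRANSPORT ERROR (00/22) rather than a media error. The throughput sample above is also self-consistent: 827.75 MiB/s divided by 6622.00 IOPS is exactly 131072 bytes (128 KiB) per command, which matches len:32 logical blocks if the namespace uses 4 KiB blocks; that block size is an inference from the numbers, not something the log states. The sketch below illustrates the digest check under those assumptions; the bitwise crc32c and the helper verify_data_digest are this note's own scaffolding, not SPDK code.

    # Illustrative only: recompute an NVMe/TCP data digest the way a receiver would.
    # NVMe/TCP digests are CRC32C (Castagnoli; reflected polynomial 0x82F63B78).
    # This bitwise loop is slow but dependency-free; SPDK uses optimized CRC code.
    def crc32c(data: bytes) -> int:
        crc = 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
        return crc ^ 0xFFFFFFFF

    # Standard CRC32C check value.
    assert crc32c(b"123456789") == 0xE3069283

    # Hypothetical helper: a False here is what each tcp.c:2233 line reports.
    def verify_data_digest(pdu_payload: bytes, received_digest: int) -> bool:
        return crc32c(pdu_payload) == received_digest

    # The logged throughput sample checks out: bytes per I/O is exactly 128 KiB.
    assert 827.75 * 1024 * 1024 / 6622.00 == 131072.0  # 32 blocks x 4 KiB (inferred)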
00:20:43.248 [2024-11-19 02:01:53.829052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8
00:20:43.248 [2024-11-19 02:01:53.829386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:43.248 [2024-11-19 02:01:53.829426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the same digest-error/WRITE/(00/22) triplet repeats 40 more times between 02:01:53.833 and 02:01:54.021, sqhd still cycling 0002/0022/0042/0062; the failing lbas, in order: 12928, 10240, 11680, 18176, 7264, 1568, 14816, 9152, 14016, 25088, 7936, 5600, 7072, 13504, 6624, 544, 8000, 22016, 15712, 12960, 11040, 23296, 11936, 9248, 13152, 19296, 20576, 17600, 20992, 16512, 16352, 16704, 6880, 25056, 12192, 21408, 17408, 5312, 9472, 19680 ...]
00:20:43.510 [2024-11-19 02:01:54.026201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8
00:20:43.510 [2024-11-19 02:01:54.026567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:43.510 [2024-11-19 02:01:54.026618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
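Reading the completion lines: in (00/22) the first number is the status code type (0x0, generic command status) and the second the status code (0x22, printed by SPDK as TRANSIENT TRANSPORT ERROR); sqhd is the submission queue head pointer echoed by the controller, p the phase tag, m the more bit, and dnr the do-not-retry bit. dnr:0 marks each failure as retryable, which is why the initiator keeps resubmitting and the triplet repeats. A small parser sketch follows; the regex, dataclass, and function name are this note's own scaffolding, not an SPDK or nvme-cli interface.

    # Parse one spdk_nvme_print_completion line from this log into named fields.
    import re
    from dataclasses import dataclass

    @dataclass
    class Completion:
        sct: int   # status code type: 0x0 = generic command status
        sc: int    # status code: 0x22 = Transient Transport Error
        sqhd: int  # submission queue head pointer
        dnr: int   # do-not-retry bit: 0 = host may retry

    PATTERN = re.compile(
        r"\((?P<sct>[0-9a-f]+)/(?P<sc>[0-9a-f]+)\).*"
        r"sqhd:(?P<sqhd>[0-9a-f]+).*dnr:(?P<dnr>\d)")

    def parse_completion(line: str) -> Completion:
        m = PATTERN.search(line)
        return Completion(int(m["sct"], 16), int(m["sc"], 16),
                          int(m["sqhd"], 16), int(m["dnr"]))

    c = parse_completion("COMMAND TRANSIENT TRANSPORT ERROR (00/22) "
                         "qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0")
    assert (c.sct, c.sc, c.sqhd, c.dnr) == (0x0, 0x22, 0x62, 0)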
00:20:43.510 [2024-11-19 02:01:54.031034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8
00:20:43.510 [2024-11-19 02:01:54.031370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:43.510 [2024-11-19 02:01:54.031408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... the same digest-error/WRITE/(00/22) triplet repeats 40 more times between 02:01:54.035 and 02:01:54.220, sqhd still cycling 0002/0022/0042/0062; the failing lbas, in order: 22144, 13056, 4224, 22304, 16768, 21184, 25024, 17280, 9376, 23072, 5888, 18240, 18656, 3584, 15392, 4544, 24896, 22688, 14080, 160, 14016, 25184, 15264, 13056, 6176, 2528, 17408, 21152, 12928, 21376, 9216, 17088, 21664, 6528, 24992, 18656, 1920, 1152, 24768, 96 ...]
00:20:43.772 [2024-11-19 02:01:54.224125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8
00:20:43.772 [2024-11-19 02:01:54.224458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:43.772 [2024-11-19 02:01:54.224490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:43.772 [2024-11-19 02:01:54.228651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.772 [2024-11-19 02:01:54.228988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.772 [2024-11-19 02:01:54.229030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:43.772 [2024-11-19 02:01:54.233103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.772 [2024-11-19 02:01:54.233435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.772 [2024-11-19 02:01:54.233479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:43.772 [2024-11-19 02:01:54.237557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.772 [2024-11-19 02:01:54.237875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.772 [2024-11-19 02:01:54.237911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:43.772 [2024-11-19 02:01:54.242174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.772 [2024-11-19 02:01:54.242531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.772 [2024-11-19 02:01:54.242582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:43.772 [2024-11-19 02:01:54.246872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.772 [2024-11-19 02:01:54.247199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.772 [2024-11-19 02:01:54.247233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:43.772 [2024-11-19 02:01:54.251271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.772 [2024-11-19 02:01:54.251637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.772 [2024-11-19 02:01:54.251669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:43.772 [2024-11-19 02:01:54.255896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.772 [2024-11-19 02:01:54.256220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:43.772 [2024-11-19 02:01:54.256253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:43.772 [2024-11-19 02:01:54.260388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.772 [2024-11-19 02:01:54.260735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.772 [2024-11-19 02:01:54.260771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:43.772 [2024-11-19 02:01:54.264925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.772 [2024-11-19 02:01:54.265262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.772 [2024-11-19 02:01:54.265304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:43.772 [2024-11-19 02:01:54.269433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.773 [2024-11-19 02:01:54.269789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.773 [2024-11-19 02:01:54.269821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:43.773 [2024-11-19 02:01:54.274042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.773 [2024-11-19 02:01:54.274377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.773 [2024-11-19 02:01:54.274413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:43.773 [2024-11-19 02:01:54.278705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.773 [2024-11-19 02:01:54.279038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.773 [2024-11-19 02:01:54.279073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:43.773 [2024-11-19 02:01:54.283199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.773 [2024-11-19 02:01:54.283535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.773 [2024-11-19 02:01:54.283570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:43.773 [2024-11-19 02:01:54.287884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.773 [2024-11-19 02:01:54.288250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.773 [2024-11-19 02:01:54.288293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:43.773 [2024-11-19 02:01:54.292563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.773 [2024-11-19 02:01:54.292905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.773 [2024-11-19 02:01:54.292949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:43.773 [2024-11-19 02:01:54.297237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.773 [2024-11-19 02:01:54.297593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.773 [2024-11-19 02:01:54.297623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:43.773 [2024-11-19 02:01:54.301805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.773 [2024-11-19 02:01:54.302148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.773 [2024-11-19 02:01:54.302186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:43.773 [2024-11-19 02:01:54.306468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.773 [2024-11-19 02:01:54.306812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.773 [2024-11-19 02:01:54.306849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:43.773 [2024-11-19 02:01:54.311058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.773 [2024-11-19 02:01:54.311383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.773 [2024-11-19 02:01:54.311417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:43.773 [2024-11-19 02:01:54.315752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.773 [2024-11-19 02:01:54.316076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.773 [2024-11-19 02:01:54.316110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:43.773 [2024-11-19 02:01:54.320472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.773 [2024-11-19 02:01:54.320833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.773 [2024-11-19 02:01:54.320870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:43.773 [2024-11-19 02:01:54.325104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.773 [2024-11-19 02:01:54.325432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.773 [2024-11-19 02:01:54.325466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:43.773 [2024-11-19 02:01:54.329728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.773 [2024-11-19 02:01:54.330089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.773 [2024-11-19 02:01:54.330127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:43.773 [2024-11-19 02:01:54.334423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.773 [2024-11-19 02:01:54.334787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.773 [2024-11-19 02:01:54.334824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:43.773 [2024-11-19 02:01:54.338941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.773 [2024-11-19 02:01:54.339273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.773 [2024-11-19 02:01:54.339306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:43.773 [2024-11-19 02:01:54.343589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.773 [2024-11-19 02:01:54.343913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.773 [2024-11-19 02:01:54.343949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:43.773 [2024-11-19 02:01:54.348157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.773 [2024-11-19 02:01:54.348490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.773 [2024-11-19 02:01:54.348545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:43.773 [2024-11-19 02:01:54.352732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.773 [2024-11-19 02:01:54.353065] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.773 [2024-11-19 02:01:54.353099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:43.773 [2024-11-19 02:01:54.357294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.773 [2024-11-19 02:01:54.357633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.773 [2024-11-19 02:01:54.357680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:43.773 [2024-11-19 02:01:54.362000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.773 [2024-11-19 02:01:54.362356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.773 [2024-11-19 02:01:54.362393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:43.773 [2024-11-19 02:01:54.366635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.773 [2024-11-19 02:01:54.366958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.773 [2024-11-19 02:01:54.366992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:43.773 [2024-11-19 02:01:54.371165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.773 [2024-11-19 02:01:54.371490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.773 [2024-11-19 02:01:54.371532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:43.773 [2024-11-19 02:01:54.375789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.773 [2024-11-19 02:01:54.376117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.773 [2024-11-19 02:01:54.376150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:43.773 [2024-11-19 02:01:54.380524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.773 [2024-11-19 02:01:54.380851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.773 [2024-11-19 02:01:54.380884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:43.773 [2024-11-19 02:01:54.385317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:43.773 [2024-11-19 02:01:54.385688] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:43.774 [2024-11-19 02:01:54.385725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.034 [2024-11-19 02:01:54.390472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.034 [2024-11-19 02:01:54.390822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.034 [2024-11-19 02:01:54.390858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.034 [2024-11-19 02:01:54.395249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.034 [2024-11-19 02:01:54.395589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.034 [2024-11-19 02:01:54.395654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.034 [2024-11-19 02:01:54.399926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.034 [2024-11-19 02:01:54.400261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.034 [2024-11-19 02:01:54.400296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.034 [2024-11-19 02:01:54.404404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.034 [2024-11-19 02:01:54.404752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.034 [2024-11-19 02:01:54.404788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.034 [2024-11-19 02:01:54.408913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.034 [2024-11-19 02:01:54.409250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.034 [2024-11-19 02:01:54.409288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.034 [2024-11-19 02:01:54.413369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.034 [2024-11-19 02:01:54.413713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.034 [2024-11-19 02:01:54.413746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.034 [2024-11-19 02:01:54.417921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.034 [2024-11-19 
02:01:54.418283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.034 [2024-11-19 02:01:54.418336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.034 [2024-11-19 02:01:54.422661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.034 [2024-11-19 02:01:54.422985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.034 [2024-11-19 02:01:54.423020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.034 [2024-11-19 02:01:54.427189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.034 [2024-11-19 02:01:54.427522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.034 [2024-11-19 02:01:54.427563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.034 [2024-11-19 02:01:54.432120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.034 [2024-11-19 02:01:54.432443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.034 [2024-11-19 02:01:54.432476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.034 [2024-11-19 02:01:54.437054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.034 [2024-11-19 02:01:54.437376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.034 [2024-11-19 02:01:54.437407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.034 [2024-11-19 02:01:54.442304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.034 [2024-11-19 02:01:54.442657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.034 [2024-11-19 02:01:54.442693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.034 [2024-11-19 02:01:54.447628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.034 [2024-11-19 02:01:54.447969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.034 [2024-11-19 02:01:54.448008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.034 [2024-11-19 02:01:54.453071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 
00:20:44.034 [2024-11-19 02:01:54.453406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.034 [2024-11-19 02:01:54.453444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.034 [2024-11-19 02:01:54.458392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.034 [2024-11-19 02:01:54.458738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.034 [2024-11-19 02:01:54.458772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.034 [2024-11-19 02:01:54.463357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.034 [2024-11-19 02:01:54.463711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.034 [2024-11-19 02:01:54.463743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.034 [2024-11-19 02:01:54.468415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.034 [2024-11-19 02:01:54.468779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.034 [2024-11-19 02:01:54.468820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.034 [2024-11-19 02:01:54.473290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.034 [2024-11-19 02:01:54.473634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.034 [2024-11-19 02:01:54.473665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.034 [2024-11-19 02:01:54.477867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.034 [2024-11-19 02:01:54.478230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.035 [2024-11-19 02:01:54.478270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.035 [2024-11-19 02:01:54.482505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.035 [2024-11-19 02:01:54.482869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.035 [2024-11-19 02:01:54.482905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.035 [2024-11-19 02:01:54.487075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) 
with pdu=0x2000166ff3c8 00:20:44.035 [2024-11-19 02:01:54.487413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.035 [2024-11-19 02:01:54.487445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.035 [2024-11-19 02:01:54.491778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.035 [2024-11-19 02:01:54.492158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.035 [2024-11-19 02:01:54.492194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.035 [2024-11-19 02:01:54.496456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.035 [2024-11-19 02:01:54.496808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.035 [2024-11-19 02:01:54.496846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.035 [2024-11-19 02:01:54.501055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.035 [2024-11-19 02:01:54.501393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.035 [2024-11-19 02:01:54.501439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.035 [2024-11-19 02:01:54.505493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.035 [2024-11-19 02:01:54.505839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.035 [2024-11-19 02:01:54.505870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.035 [2024-11-19 02:01:54.510014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.035 [2024-11-19 02:01:54.510336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.035 [2024-11-19 02:01:54.510388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.035 [2024-11-19 02:01:54.514584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.035 [2024-11-19 02:01:54.514961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.035 [2024-11-19 02:01:54.515014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.035 [2024-11-19 02:01:54.519160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.035 [2024-11-19 02:01:54.519494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.035 [2024-11-19 02:01:54.519543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.035 [2024-11-19 02:01:54.523924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.035 [2024-11-19 02:01:54.524248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.035 [2024-11-19 02:01:54.524283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.035 [2024-11-19 02:01:54.528445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.035 [2024-11-19 02:01:54.528794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.035 [2024-11-19 02:01:54.528832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.035 [2024-11-19 02:01:54.532964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.035 [2024-11-19 02:01:54.533300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.035 [2024-11-19 02:01:54.533344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.035 [2024-11-19 02:01:54.537412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.035 [2024-11-19 02:01:54.537757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.035 [2024-11-19 02:01:54.537794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.035 [2024-11-19 02:01:54.541888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.035 [2024-11-19 02:01:54.542253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.035 [2024-11-19 02:01:54.542292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.035 [2024-11-19 02:01:54.546532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.035 [2024-11-19 02:01:54.546886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.035 [2024-11-19 02:01:54.546923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.035 [2024-11-19 02:01:54.551126] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.035 [2024-11-19 02:01:54.551461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.035 [2024-11-19 02:01:54.551495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.035 [2024-11-19 02:01:54.555682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.035 [2024-11-19 02:01:54.556015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.035 [2024-11-19 02:01:54.556048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.035 [2024-11-19 02:01:54.560253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.035 [2024-11-19 02:01:54.560611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.035 [2024-11-19 02:01:54.560645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.035 [2024-11-19 02:01:54.564813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.035 [2024-11-19 02:01:54.565149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.035 [2024-11-19 02:01:54.565186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.035 [2024-11-19 02:01:54.569288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.035 [2024-11-19 02:01:54.569649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.035 [2024-11-19 02:01:54.569682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.035 [2024-11-19 02:01:54.573780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.035 [2024-11-19 02:01:54.574151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.035 [2024-11-19 02:01:54.574190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.035 [2024-11-19 02:01:54.578509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.035 [2024-11-19 02:01:54.578856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.035 [2024-11-19 02:01:54.578893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.035 [2024-11-19 02:01:54.582919] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.035 [2024-11-19 02:01:54.583253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.035 [2024-11-19 02:01:54.583298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.035 [2024-11-19 02:01:54.587422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.035 [2024-11-19 02:01:54.587772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.035 [2024-11-19 02:01:54.587808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.035 [2024-11-19 02:01:54.591892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.035 [2024-11-19 02:01:54.592233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.035 [2024-11-19 02:01:54.592265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.036 [2024-11-19 02:01:54.596342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.036 [2024-11-19 02:01:54.596692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.036 [2024-11-19 02:01:54.596722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.036 [2024-11-19 02:01:54.601045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.036 [2024-11-19 02:01:54.601380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.036 [2024-11-19 02:01:54.601424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.036 [2024-11-19 02:01:54.605594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.036 [2024-11-19 02:01:54.605906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.036 [2024-11-19 02:01:54.605950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.036 [2024-11-19 02:01:54.610181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.036 [2024-11-19 02:01:54.610511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.036 [2024-11-19 02:01:54.610559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.036 
[2024-11-19 02:01:54.614733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.036 [2024-11-19 02:01:54.615080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.036 [2024-11-19 02:01:54.615112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.036 [2024-11-19 02:01:54.619512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.036 [2024-11-19 02:01:54.619837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.036 [2024-11-19 02:01:54.619873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.036 [2024-11-19 02:01:54.624021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.036 [2024-11-19 02:01:54.624354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.036 [2024-11-19 02:01:54.624388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.036 [2024-11-19 02:01:54.628536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.036 [2024-11-19 02:01:54.628868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.036 [2024-11-19 02:01:54.628903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.036 [2024-11-19 02:01:54.633109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.036 [2024-11-19 02:01:54.633447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.036 [2024-11-19 02:01:54.633482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.036 [2024-11-19 02:01:54.637728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.036 [2024-11-19 02:01:54.638069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.036 [2024-11-19 02:01:54.638106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.036 [2024-11-19 02:01:54.642355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.036 [2024-11-19 02:01:54.642702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.036 [2024-11-19 02:01:54.642734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:20:44.036 [2024-11-19 02:01:54.647133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.036 [2024-11-19 02:01:54.647475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.036 [2024-11-19 02:01:54.647516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.296 [2024-11-19 02:01:54.652071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.296 [2024-11-19 02:01:54.652403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.296 [2024-11-19 02:01:54.652436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.296 [2024-11-19 02:01:54.657036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.296 [2024-11-19 02:01:54.657391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.296 [2024-11-19 02:01:54.657428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.296 [2024-11-19 02:01:54.661691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.296 [2024-11-19 02:01:54.662056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.296 [2024-11-19 02:01:54.662090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.296 [2024-11-19 02:01:54.666363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.296 [2024-11-19 02:01:54.666698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.296 [2024-11-19 02:01:54.666732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.296 [2024-11-19 02:01:54.671155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.296 [2024-11-19 02:01:54.671490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.296 [2024-11-19 02:01:54.671542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.296 [2024-11-19 02:01:54.675654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.296 [2024-11-19 02:01:54.675988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.296 [2024-11-19 02:01:54.676020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.296 [2024-11-19 02:01:54.680213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.296 [2024-11-19 02:01:54.680545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.296 [2024-11-19 02:01:54.680586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.296 [2024-11-19 02:01:54.684778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.296 [2024-11-19 02:01:54.685111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.296 [2024-11-19 02:01:54.685157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.296 [2024-11-19 02:01:54.689447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.296 [2024-11-19 02:01:54.689828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.296 [2024-11-19 02:01:54.689864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.296 [2024-11-19 02:01:54.694127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.296 [2024-11-19 02:01:54.694487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.296 [2024-11-19 02:01:54.694549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.296 [2024-11-19 02:01:54.698773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.296 [2024-11-19 02:01:54.699117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.296 [2024-11-19 02:01:54.699149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.296 [2024-11-19 02:01:54.703356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.296 [2024-11-19 02:01:54.703691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.296 [2024-11-19 02:01:54.703722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.296 [2024-11-19 02:01:54.707639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.296 [2024-11-19 02:01:54.707727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.296 [2024-11-19 02:01:54.707748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.296 [2024-11-19 02:01:54.712026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.296 [2024-11-19 02:01:54.712117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.296 [2024-11-19 02:01:54.712138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.296 [2024-11-19 02:01:54.716450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.296 [2024-11-19 02:01:54.716552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.296 [2024-11-19 02:01:54.716574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.296 [2024-11-19 02:01:54.720949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.296 [2024-11-19 02:01:54.721039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.296 [2024-11-19 02:01:54.721059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.296 [2024-11-19 02:01:54.725300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.296 [2024-11-19 02:01:54.725390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.296 [2024-11-19 02:01:54.725410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.296 [2024-11-19 02:01:54.730078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.296 [2024-11-19 02:01:54.730155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.297 [2024-11-19 02:01:54.730176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.297 [2024-11-19 02:01:54.734510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.297 [2024-11-19 02:01:54.734612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.297 [2024-11-19 02:01:54.734632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.297 [2024-11-19 02:01:54.738878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.297 [2024-11-19 02:01:54.738968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.297 [2024-11-19 02:01:54.738988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.297 [2024-11-19 02:01:54.743341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.297 [2024-11-19 02:01:54.743432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.297 [2024-11-19 02:01:54.743452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.297 [2024-11-19 02:01:54.748075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.297 [2024-11-19 02:01:54.748169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.297 [2024-11-19 02:01:54.748189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.297 [2024-11-19 02:01:54.752586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.297 [2024-11-19 02:01:54.752657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.297 [2024-11-19 02:01:54.752677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.297 [2024-11-19 02:01:54.757029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.297 [2024-11-19 02:01:54.757119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.297 [2024-11-19 02:01:54.757139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.297 [2024-11-19 02:01:54.761556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.297 [2024-11-19 02:01:54.761630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.297 [2024-11-19 02:01:54.761652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.297 [2024-11-19 02:01:54.765865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.297 [2024-11-19 02:01:54.765966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.297 [2024-11-19 02:01:54.765988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.297 [2024-11-19 02:01:54.770366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.297 [2024-11-19 02:01:54.770455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.297 [2024-11-19 02:01:54.770475] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.297 [2024-11-19 02:01:54.774779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.297 [2024-11-19 02:01:54.774871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.297 [2024-11-19 02:01:54.774891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.297 [2024-11-19 02:01:54.779198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.297 [2024-11-19 02:01:54.779290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.297 [2024-11-19 02:01:54.779310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.297 [2024-11-19 02:01:54.783666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.297 [2024-11-19 02:01:54.783756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.297 [2024-11-19 02:01:54.783776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.297 [2024-11-19 02:01:54.788059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.297 [2024-11-19 02:01:54.788149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.297 [2024-11-19 02:01:54.788170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:44.297 [2024-11-19 02:01:54.792792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.297 [2024-11-19 02:01:54.792879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.297 [2024-11-19 02:01:54.792899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:44.297 [2024-11-19 02:01:54.797223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.297 [2024-11-19 02:01:54.797314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.297 [2024-11-19 02:01:54.797335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.297 [2024-11-19 02:01:54.801801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8 00:20:44.297 [2024-11-19 02:01:54.801876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.297 [2024-11-19 02:01:54.801896] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:44.297 [2024-11-19 02:01:54.806359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8
00:20:44.297 [2024-11-19 02:01:54.806445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:44.297 [2024-11-19 02:01:54.806465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:44.297 [2024-11-19 02:01:54.810926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8
00:20:44.297 [2024-11-19 02:01:54.811018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:44.297 [2024-11-19 02:01:54.811040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:44.297 [2024-11-19 02:01:54.815447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8
00:20:44.297 [2024-11-19 02:01:54.815549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:44.297 [2024-11-19 02:01:54.815582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:44.297 [2024-11-19 02:01:54.820085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8
00:20:44.297 [2024-11-19 02:01:54.820179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:44.297 [2024-11-19 02:01:54.820199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:44.297 6624.00 IOPS, 828.00 MiB/s [2024-11-19T02:01:54.912Z] [2024-11-19 02:01:54.825782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xfcd5e0) with pdu=0x2000166ff3c8
00:20:44.297 [2024-11-19 02:01:54.825864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:44.297 [2024-11-19 02:01:54.825885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:44.297
00:20:44.297 Latency(us)
00:20:44.297 [2024-11-19T02:01:54.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:44.297 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:20:44.297 nvme0n1 : 2.00 6621.21 827.65 0.00 0.00 2410.99 1794.79 9711.24
00:20:44.297 [2024-11-19T02:01:54.912Z] ===================================================================================================================
00:20:44.297 [2024-11-19T02:01:54.912Z] Total : 6621.21 827.65 0.00 0.00 2410.99 1794.79 9711.24
00:20:44.297 {
00:20:44.297 "results": [
00:20:44.297 {
00:20:44.297 "job": "nvme0n1",
00:20:44.297 "core_mask": "0x2",
00:20:44.297 "workload": "randwrite",
00:20:44.297 "status": "finished",
00:20:44.297 "queue_depth": 16,
00:20:44.297 "io_size": 131072,
00:20:44.297 "runtime": 2.004014,
00:20:44.297 "iops": 6621.21122906327,
00:20:44.297 "mibps": 827.6514036329088,
00:20:44.297 "io_failed": 0,
00:20:44.297 "io_timeout": 0,
00:20:44.297 "avg_latency_us": 2410.9921060023707,
00:20:44.297 "min_latency_us": 1794.7927272727272,
00:20:44.297 "max_latency_us": 9711.243636363637
00:20:44.297 }
00:20:44.297 ],
00:20:44.297 "core_count": 1
00:20:44.297 }
00:20:44.297 02:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:44.297 02:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:44.297 02:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:44.297 02:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:44.297 | .driver_specific
00:20:44.297 | .nvme_error
00:20:44.297 | .status_code
00:20:44.298 | .command_transient_transport_error'
00:20:44.557 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 429 > 0 ))
00:20:44.557 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94956
00:20:44.557 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94956 ']'
00:20:44.557 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94956
00:20:44.557 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:20:44.557 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:44.557 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94956
00:20:44.557 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:20:44.557 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:20:44.557 killing process with pid 94956
00:20:44.557 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94956' Received shutdown signal, test time was about 2.000000 seconds
00:20:44.557
00:20:44.557 Latency(us)
00:20:44.557 [2024-11-19T02:01:55.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:44.557 [2024-11-19T02:01:55.172Z] ===================================================================================================================
00:20:44.557 [2024-11-19T02:01:55.172Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:44.557 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94956
00:20:44.557 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94956
00:20:44.816 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 94784
00:20:44.816 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94784 ']'
00:20:44.816 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94784
00:20:44.816 02:01:55
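The jq pipeline traced above is the test's pass/fail probe: bdevperf's bdev_get_iostat RPC keeps a per-bdev counter of completions that came back with the TRANSIENT TRANSPORT ERROR status, and the (( 429 > 0 )) check passes because 429 such retried completions were counted during the 2-second run. A minimal standalone sketch of the same check, reusing the exact RPC, socket, and jq path from the trace (only the variable plumbing is mine):

  # count how many commands were completed with a transient transport error
  # (here: NVMe/TCP data-digest failures that the host retried)
  errs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errs > 0 )) && echo "nvme0n1 retried $errs commands after digest failures"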
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:44.816 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:44.816 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94784 00:20:44.816 killing process with pid 94784 00:20:44.816 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:44.816 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:44.816 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94784' 00:20:44.816 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94784 00:20:44.816 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94784 00:20:44.816 ************************************ 00:20:44.816 END TEST nvmf_digest_error 00:20:44.816 ************************************ 00:20:44.816 00:20:44.816 real 0m14.316s 00:20:44.816 user 0m27.651s 00:20:44.816 sys 0m4.365s 00:20:44.816 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:44.816 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:45.075 rmmod nvme_tcp 00:20:45.075 rmmod nvme_fabrics 00:20:45.075 rmmod nvme_keyring 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 94784 ']' 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 94784 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 94784 ']' 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 94784 00:20:45.075 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (94784) - No such process 00:20:45.075 Process with pid 94784 is not found 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 94784 is not found' 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- 
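The killprocess trace just walked through condenses to roughly the helper below; this is a paraphrase of the traced steps, not the verbatim autotest_common.sh source. The second killprocess 94784 above fails at exactly the kill -0 step ("No such process") because the target already exited when nvmftestfini unloaded the modules, so the helper only logs "Process with pid 94784 is not found" and moves on; the nvmf/common.sh teardown that follows then strips the SPDK_NVMF-tagged iptables rules and the veth/bridge topology.

  killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1                 # the '[' -z PID ']' guard in the trace
    kill -0 "$pid" 2>/dev/null || return 1    # still alive? (fails for the second 94784 call)
    if [[ $(uname) == Linux ]]; then
      ps --no-headers -o comm= "$pid"         # logged above as process_name=reactor_N
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                       # reap; a SIGTERM'd child exits nonzero
  }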
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:45.075 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:45.334 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:45.334 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:45.334 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:45.334 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:45.334 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.334 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:45.335 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.335 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:20:45.335 00:20:45.335 real 0m31.448s 00:20:45.335 user 0m59.587s 00:20:45.335 sys 0m9.120s 00:20:45.335 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:45.335 02:01:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:45.335 ************************************ 00:20:45.335 END TEST nvmf_digest 00:20:45.335 ************************************ 00:20:45.335 02:01:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:20:45.335 02:01:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:20:45.335 02:01:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh 
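The multipath suite now starts from a fresh shell, so the next few hundred trace lines re-run the common setup: a version probe of lcov, re-sourcing nvmf/common.sh, and rebuilding the veth topology that the digest teardown just deleted. For the teardown itself, the traced commands boil down to this sketch (commands taken from the trace above; the grouping and the explicit netns delete are my condensation):

  # drop only the SPDK-tagged firewall rules, keep everything else intact
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # then tear down the bridge, the host-side veth ends, and the target namespace
  ip link delete nvmf_br type bridge 2>/dev/null || true
  ip link delete nvmf_init_if 2>/dev/null || true
  ip link delete nvmf_init_if2 2>/dev/null || true
  ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true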
--transport=tcp 00:20:45.335 02:01:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:45.335 02:01:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:45.335 02:01:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.335 ************************************ 00:20:45.335 START TEST nvmf_host_multipath 00:20:45.335 ************************************ 00:20:45.335 02:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:20:45.335 * Looking for test storage... 00:20:45.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:45.335 02:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:45.335 02:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:20:45.335 02:01:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:45.594 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:45.594 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:45.594 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:45.594 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:45.594 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:20:45.594 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:20:45.594 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:20:45.594 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:20:45.594 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:20:45.594 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:20:45.594 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:20:45.594 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:45.594 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:20:45.594 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:20:45.594 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:45.594 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
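The lt/cmp_versions trace running here is scripts/common.sh deciding whether the installed lcov (1.15) predates 2.x, so the right coverage flags get exported. A shorter equivalent of that comparison, assuming GNU sort -V is available (the in-tree helper compares version fields one by one instead, as the trace below shows):

  version_lt() {
    # true if $1 is strictly older than $2, e.g. version_lt 1.15 2
    [[ $1 != "$2" ]] && [[ $(printf '%s\n' "$1" "$2" | sort -V | head -n1) == "$1" ]]
  }
  version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov 1.x: use legacy coverage flags"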
ver1_l : ver2_l) )) 00:20:45.594 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:20:45.594 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:20:45.594 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:45.594 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:20:45.594 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:20:45.594 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:20:45.594 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:20:45.594 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:45.594 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:20:45.594 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:20:45.594 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:45.594 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:45.594 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:20:45.594 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:45.594 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:45.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.594 --rc genhtml_branch_coverage=1 00:20:45.594 --rc genhtml_function_coverage=1 00:20:45.594 --rc genhtml_legend=1 00:20:45.594 --rc geninfo_all_blocks=1 00:20:45.595 --rc geninfo_unexecuted_blocks=1 00:20:45.595 00:20:45.595 ' 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:45.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.595 --rc genhtml_branch_coverage=1 00:20:45.595 --rc genhtml_function_coverage=1 00:20:45.595 --rc genhtml_legend=1 00:20:45.595 --rc geninfo_all_blocks=1 00:20:45.595 --rc geninfo_unexecuted_blocks=1 00:20:45.595 00:20:45.595 ' 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:45.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.595 --rc genhtml_branch_coverage=1 00:20:45.595 --rc genhtml_function_coverage=1 00:20:45.595 --rc genhtml_legend=1 00:20:45.595 --rc geninfo_all_blocks=1 00:20:45.595 --rc geninfo_unexecuted_blocks=1 00:20:45.595 00:20:45.595 ' 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:45.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.595 --rc genhtml_branch_coverage=1 00:20:45.595 --rc genhtml_function_coverage=1 00:20:45.595 --rc genhtml_legend=1 00:20:45.595 --rc geninfo_all_blocks=1 00:20:45.595 --rc geninfo_unexecuted_blocks=1 00:20:45.595 00:20:45.595 ' 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:45.595 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:45.595 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:45.596 Cannot find device "nvmf_init_br" 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:45.596 Cannot find device "nvmf_init_br2" 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:45.596 Cannot find device "nvmf_tgt_br" 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:45.596 Cannot find device "nvmf_tgt_br2" 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:45.596 Cannot find device "nvmf_init_br" 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:45.596 Cannot find device "nvmf_init_br2" 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:45.596 Cannot find device "nvmf_tgt_br" 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:45.596 Cannot find device "nvmf_tgt_br2" 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:45.596 Cannot find device "nvmf_br" 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:45.596 Cannot find device "nvmf_init_if" 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:45.596 Cannot find device "nvmf_init_if2" 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:20:45.596 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:45.596 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:45.596 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:45.855 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:45.855 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:45.855 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:45.855 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:45.855 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:45.855 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:45.855 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:45.855 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:45.855 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:45.855 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:45.855 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:45.855 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:45.855 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:45.855 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:45.855 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:45.855 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:45.855 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:45.855 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:45.855 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:45.855 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:45.855 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
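At this point nvmf_veth_init has built the whole dual-path test topology: the "Cannot find device"/"Cannot open network namespace" lines above are just the idempotent pre-cleanup failing harmlessly on a fresh node. Condensed to one initiator/target pair for readability (every command lifted from the trace; the nvmf_init_if2/nvmf_tgt_if2 pair at 10.0.0.2/10.0.0.4 is configured identically, and the remaining bridge enslaving, iptables rules, and ping checks follow below):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side moves into the netns
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                     # nvmf_tgt_br is enslaved next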
00:20:45.855 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:20:45.855 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:20:45.855 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:20:45.855 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:20:45.855 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:20:45.855 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:20:45.855 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:20:45.855 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:20:45.855 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:20:45.855 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:20:45.855 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:20:45.855 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms
00:20:45.855
00:20:45.856 --- 10.0.0.3 ping statistics ---
00:20:45.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:45.856 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms
00:20:45.856 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:20:45.856 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:20:45.856 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms
00:20:45.856
00:20:45.856 --- 10.0.0.4 ping statistics ---
00:20:45.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:45.856 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms
00:20:45.856 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:20:45.856 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:45.856 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms
00:20:45.856
00:20:45.856 --- 10.0.0.1 ping statistics ---
00:20:45.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:45.856 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms
00:20:45.856 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:20:45.856 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:45.856 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms
00:20:45.856
00:20:45.856 --- 10.0.0.2 ping statistics ---
00:20:45.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:45.856 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms
00:20:45.856 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:45.856 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0
00:20:45.856 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:20:45.856 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:45.856 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:20:45.856 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:20:45.856 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:45.856 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:20:45.856 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:20:45.856 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3
00:20:45.856 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:20:45.856 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:45.856 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x
00:20:45.856 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=95264
00:20:45.856 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 95264
00:20:45.856 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:20:45.856 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 95264 ']'
00:20:45.856 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:45.856 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:45.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:45.856 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:45.856 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:45.856 02:01:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x
00:20:46.115 [2024-11-19 02:01:56.508438] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization...
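While the target prints its startup banner, waitforlisten is polling the new RPC socket until the app answers; it produces no trace output of its own beyond the "Waiting for process..." line above. A rough sketch of that loop — using rpc_get_methods as the cheap liveness probe is my assumption, not the verbatim helper:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for ((i = 0; i < 100; i++)); do            # max_retries=100, as in the trace
    if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
      echo "nvmf_tgt is up and serving RPCs"  # the (( i == 0 )) check later mirrors this counter
      break
    fi
    sleep 0.1
  done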
00:20:46.115 [2024-11-19 02:01:56.508570] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.115 [2024-11-19 02:01:56.652273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:46.115 [2024-11-19 02:01:56.669837] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.115 [2024-11-19 02:01:56.669963] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.115 [2024-11-19 02:01:56.669991] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.115 [2024-11-19 02:01:56.669999] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.115 [2024-11-19 02:01:56.670006] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:46.115 [2024-11-19 02:01:56.670795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.115 [2024-11-19 02:01:56.670809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.115 [2024-11-19 02:01:56.698215] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:47.051 02:01:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:47.051 02:01:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:20:47.051 02:01:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:47.051 02:01:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:47.051 02:01:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:47.051 02:01:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.051 02:01:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=95264 00:20:47.051 02:01:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:47.310 [2024-11-19 02:01:57.802662] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.310 02:01:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:47.570 Malloc0 00:20:47.570 02:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:20:47.828 02:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:48.087 02:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:48.345 [2024-11-19 02:01:58.763805] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:48.345 02:01:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
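With the target running, the subsystem is provisioned entirely over RPC. The five calls traced above, collected in one place (arguments verbatim from the trace; rpc.py talks to /var/tmp/spdk.sock by default, and, if I read the rpc.py flags right, -a allows any host while -r enables the ANA reporting that the multipath checks below depend on):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0    # 64 MiB RAM bdev, 512-byte blocks (MALLOC_* above)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420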
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:48.604 [2024-11-19 02:01:59.031991] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:48.604 02:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=95321 00:20:48.604 02:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:48.604 02:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:48.604 02:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 95321 /var/tmp/bdevperf.sock 00:20:48.604 02:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 95321 ']' 00:20:48.604 02:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:48.604 02:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:48.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:48.604 02:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:48.604 02:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:48.604 02:01:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:49.539 02:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:49.539 02:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:20:49.539 02:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:49.798 02:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:50.056 Nvme0n1 00:20:50.056 02:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:50.315 Nvme0n1 00:20:50.315 02:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:20:50.315 02:02:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:20:51.695 02:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:20:51.695 02:02:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:51.695 02:02:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:51.954 02:02:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:20:51.954 02:02:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95361 00:20:51.954 02:02:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95264 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:51.954 02:02:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:58.518 02:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:58.518 02:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:58.518 02:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:58.518 02:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:58.518 Attaching 4 probes... 00:20:58.518 @path[10.0.0.3, 4421]: 20200 00:20:58.518 @path[10.0.0.3, 4421]: 20688 00:20:58.518 @path[10.0.0.3, 4421]: 20736 00:20:58.518 @path[10.0.0.3, 4421]: 20492 00:20:58.518 @path[10.0.0.3, 4421]: 20545 00:20:58.518 02:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:58.518 02:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:58.518 02:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:58.518 02:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:58.518 02:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:58.518 02:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:58.518 02:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95361 00:20:58.518 02:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:58.518 02:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:20:58.518 02:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:58.518 02:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:58.798 02:02:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:20:58.798 02:02:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95482 00:20:58.798 02:02:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95264 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:58.798 02:02:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:05.367 02:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:05.367 02:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:05.367 02:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:05.367 02:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:05.367 Attaching 4 probes... 00:21:05.367 @path[10.0.0.3, 4420]: 20367 00:21:05.367 @path[10.0.0.3, 4420]: 20618 00:21:05.367 @path[10.0.0.3, 4420]: 20692 00:21:05.368 @path[10.0.0.3, 4420]: 20723 00:21:05.368 @path[10.0.0.3, 4420]: 20828 00:21:05.368 02:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:05.368 02:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:05.368 02:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:05.368 02:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:05.368 02:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:05.368 02:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:05.368 02:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95482 00:21:05.368 02:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:05.368 02:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:21:05.368 02:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:05.368 02:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:05.368 02:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:21:05.368 02:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95264 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:05.368 02:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95601 00:21:05.368 02:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:12.014 02:02:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:12.014 02:02:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:12.014 02:02:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:12.014 02:02:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:12.014 Attaching 4 probes... 00:21:12.014 @path[10.0.0.3, 4421]: 15249 00:21:12.014 @path[10.0.0.3, 4421]: 20337 00:21:12.014 @path[10.0.0.3, 4421]: 20295 00:21:12.014 @path[10.0.0.3, 4421]: 20244 00:21:12.014 @path[10.0.0.3, 4421]: 20200 00:21:12.014 02:02:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:12.014 02:02:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:12.014 02:02:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:12.014 02:02:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:12.014 02:02:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:12.014 02:02:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:12.014 02:02:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95601 00:21:12.014 02:02:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:12.014 02:02:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:21:12.014 02:02:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:12.014 02:02:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:12.272 02:02:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:21:12.272 02:02:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95708 00:21:12.272 02:02:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95264 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:12.272 02:02:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:18.832 02:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:18.832 02:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:21:18.832 02:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:21:18.832 02:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:18.832 Attaching 4 probes... 
00:21:18.832 00:21:18.832 00:21:18.832 00:21:18.832 00:21:18.832 00:21:18.832 02:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:18.832 02:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:18.832 02:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:18.832 02:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:21:18.832 02:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:21:18.832 02:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:21:18.832 02:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95708 00:21:18.832 02:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:18.832 02:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:21:18.832 02:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:18.832 02:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:19.091 02:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:21:19.091 02:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95264 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:19.091 02:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95828 00:21:19.091 02:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:25.661 02:02:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:25.661 02:02:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:25.661 02:02:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:25.661 02:02:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:25.661 Attaching 4 probes... 
00:21:25.661 @path[10.0.0.3, 4421]: 19580 00:21:25.661 @path[10.0.0.3, 4421]: 19982 00:21:25.661 @path[10.0.0.3, 4421]: 20080 00:21:25.661 @path[10.0.0.3, 4421]: 19982 00:21:25.661 @path[10.0.0.3, 4421]: 20200 00:21:25.661 02:02:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:25.661 02:02:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:25.661 02:02:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:25.661 02:02:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:25.661 02:02:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:25.661 02:02:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:25.661 02:02:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95828 00:21:25.661 02:02:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:25.661 02:02:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:25.661 02:02:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:21:26.598 02:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:21:26.598 02:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95947 00:21:26.598 02:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:26.598 02:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95264 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:33.161 02:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:33.161 02:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:33.161 02:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:33.161 02:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:33.161 Attaching 4 probes... 
00:21:33.161 @path[10.0.0.3, 4420]: 19895 00:21:33.161 @path[10.0.0.3, 4420]: 19936 00:21:33.161 @path[10.0.0.3, 4420]: 19933 00:21:33.161 @path[10.0.0.3, 4420]: 20056 00:21:33.161 @path[10.0.0.3, 4420]: 20076 00:21:33.161 02:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:33.161 02:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:33.161 02:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:33.161 02:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:33.161 02:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:33.161 02:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:33.161 02:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95947 00:21:33.161 02:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:33.161 02:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:33.161 [2024-11-19 02:02:43.544445] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:33.161 02:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:33.419 02:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:21:39.995 02:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:21:39.995 02:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96128 00:21:39.995 02:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:39.995 02:02:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95264 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:45.266 02:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:45.266 02:02:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:45.525 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:45.525 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:45.525 Attaching 4 probes... 
00:21:45.525 @path[10.0.0.3, 4421]: 19570 00:21:45.525 @path[10.0.0.3, 4421]: 19932 00:21:45.525 @path[10.0.0.3, 4421]: 20030 00:21:45.525 @path[10.0.0.3, 4421]: 20069 00:21:45.525 @path[10.0.0.3, 4421]: 19979 00:21:45.525 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:45.525 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:45.525 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:45.525 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:45.525 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:45.525 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:45.525 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96128 00:21:45.525 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:45.525 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 95321 00:21:45.525 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 95321 ']' 00:21:45.525 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 95321 00:21:45.525 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:21:45.525 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:45.525 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95321 00:21:45.525 killing process with pid 95321 00:21:45.525 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:45.525 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:45.525 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95321' 00:21:45.525 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 95321 00:21:45.525 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 95321 00:21:45.525 { 00:21:45.525 "results": [ 00:21:45.525 { 00:21:45.525 "job": "Nvme0n1", 00:21:45.525 "core_mask": "0x4", 00:21:45.525 "workload": "verify", 00:21:45.525 "status": "terminated", 00:21:45.525 "verify_range": { 00:21:45.525 "start": 0, 00:21:45.525 "length": 16384 00:21:45.525 }, 00:21:45.525 "queue_depth": 128, 00:21:45.525 "io_size": 4096, 00:21:45.525 "runtime": 55.072119, 00:21:45.525 "iops": 8557.960154030028, 00:21:45.525 "mibps": 33.4295318516798, 00:21:45.525 "io_failed": 0, 00:21:45.525 "io_timeout": 0, 00:21:45.525 "avg_latency_us": 14927.034857327479, 00:21:45.525 "min_latency_us": 203.86909090909091, 00:21:45.525 "max_latency_us": 7046430.72 00:21:45.525 } 00:21:45.525 ], 00:21:45.525 "core_count": 1 00:21:45.525 } 00:21:45.793 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 95321 00:21:45.793 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:45.793 [2024-11-19 02:01:59.092104] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 
22.11.4 initialization... 00:21:45.793 [2024-11-19 02:01:59.092200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95321 ] 00:21:45.793 [2024-11-19 02:01:59.238786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.793 [2024-11-19 02:01:59.262685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:45.793 [2024-11-19 02:01:59.296101] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:45.793 Running I/O for 90 seconds... 00:21:45.793 7956.00 IOPS, 31.08 MiB/s [2024-11-19T02:02:56.408Z] 8860.00 IOPS, 34.61 MiB/s [2024-11-19T02:02:56.408Z] 9333.33 IOPS, 36.46 MiB/s [2024-11-19T02:02:56.408Z] 9582.00 IOPS, 37.43 MiB/s [2024-11-19T02:02:56.408Z] 9742.40 IOPS, 38.06 MiB/s [2024-11-19T02:02:56.408Z] 9827.33 IOPS, 38.39 MiB/s [2024-11-19T02:02:56.408Z] 9890.29 IOPS, 38.63 MiB/s [2024-11-19T02:02:56.408Z] 9916.00 IOPS, 38.73 MiB/s [2024-11-19T02:02:56.408Z] [2024-11-19 02:02:09.147653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:122624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.794 [2024-11-19 02:02:09.147709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.147780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:122632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.794 [2024-11-19 02:02:09.147802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.147824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:122640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.794 [2024-11-19 02:02:09.147839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.147873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:122648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.794 [2024-11-19 02:02:09.147888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.147921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:122656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.794 [2024-11-19 02:02:09.147935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.147953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:122664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.794 [2024-11-19 02:02:09.147967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.147986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:122672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.794 [2024-11-19 02:02:09.147999] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.148017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:122680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.794 [2024-11-19 02:02:09.148035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.148053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:122176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.794 [2024-11-19 02:02:09.148066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.148085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:122184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.794 [2024-11-19 02:02:09.148124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.148146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:122192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.794 [2024-11-19 02:02:09.148161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.148179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:122200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.794 [2024-11-19 02:02:09.148193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.148211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:122208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.794 [2024-11-19 02:02:09.148225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.148243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:122216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.794 [2024-11-19 02:02:09.148256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.148275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:122224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.794 [2024-11-19 02:02:09.148288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.148307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:122232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.794 [2024-11-19 02:02:09.148321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.148339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:122240 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:45.794 [2024-11-19 02:02:09.148354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.148374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:122248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.794 [2024-11-19 02:02:09.148387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.148406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:122256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.794 [2024-11-19 02:02:09.148419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.148438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:122264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.794 [2024-11-19 02:02:09.148451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.148470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:122272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.794 [2024-11-19 02:02:09.148500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.148519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:122280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.794 [2024-11-19 02:02:09.148541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.148583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:122288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.794 [2024-11-19 02:02:09.148602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.148622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:122296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.794 [2024-11-19 02:02:09.148636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.148660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:122688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.794 [2024-11-19 02:02:09.148676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.148696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:122696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.794 [2024-11-19 02:02:09.148710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.148729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:10 nsid:1 lba:122704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.794 [2024-11-19 02:02:09.148743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.148761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:122712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.794 [2024-11-19 02:02:09.148776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.148796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:122720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.794 [2024-11-19 02:02:09.148810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.148829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:122728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.794 [2024-11-19 02:02:09.148843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.148862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:122736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.794 [2024-11-19 02:02:09.148876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.148895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:122744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.794 [2024-11-19 02:02:09.148909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.148927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:122304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.794 [2024-11-19 02:02:09.148942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:45.794 [2024-11-19 02:02:09.148963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:122312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.795 [2024-11-19 02:02:09.148977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.149035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:122320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.795 [2024-11-19 02:02:09.149052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.149072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:122328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.795 [2024-11-19 02:02:09.149086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 
02:02:09.149106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:122336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.795 [2024-11-19 02:02:09.149120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.149139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:122344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.795 [2024-11-19 02:02:09.149153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.149172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.795 [2024-11-19 02:02:09.149186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.149206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:122360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.795 [2024-11-19 02:02:09.149220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.149239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:122368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.795 [2024-11-19 02:02:09.149252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.149271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:122376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.795 [2024-11-19 02:02:09.149285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.149304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:122384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.795 [2024-11-19 02:02:09.149319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.149337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:122392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.795 [2024-11-19 02:02:09.149351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.149370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:122400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.795 [2024-11-19 02:02:09.149384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.149403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:122408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.795 [2024-11-19 02:02:09.149417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.149443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:122416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.795 [2024-11-19 02:02:09.149458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.149477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:122424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.795 [2024-11-19 02:02:09.149491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.149528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:122752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.795 [2024-11-19 02:02:09.149545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.149566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:122760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.795 [2024-11-19 02:02:09.149581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.149600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:122768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.795 [2024-11-19 02:02:09.149614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.149633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:122776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.795 [2024-11-19 02:02:09.149647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.149667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:122784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.795 [2024-11-19 02:02:09.149681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.149700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:122792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.795 [2024-11-19 02:02:09.149714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.149735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:122800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.795 [2024-11-19 02:02:09.149749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.149768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:122808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.795 [2024-11-19 02:02:09.149781] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.149800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:122816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.795 [2024-11-19 02:02:09.149814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.149833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:122824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.795 [2024-11-19 02:02:09.149847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.149873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:122832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.795 [2024-11-19 02:02:09.149889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.149908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:122840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.795 [2024-11-19 02:02:09.149922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.149950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:122848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.795 [2024-11-19 02:02:09.150000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.150022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:122856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.795 [2024-11-19 02:02:09.150037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.150058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:122864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.795 [2024-11-19 02:02:09.150073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.150094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:122872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.795 [2024-11-19 02:02:09.150108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.150129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:122880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.795 [2024-11-19 02:02:09.150143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.150165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:122888 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:45.795 [2024-11-19 02:02:09.150179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.150200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:122896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.795 [2024-11-19 02:02:09.150215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:45.795 [2024-11-19 02:02:09.150235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:122904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.796 [2024-11-19 02:02:09.150250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.150270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:122912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.796 [2024-11-19 02:02:09.150299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.150333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:122920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.796 [2024-11-19 02:02:09.150347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.150366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:122928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.796 [2024-11-19 02:02:09.150386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.150407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:122936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.796 [2024-11-19 02:02:09.150421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.150440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:122432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.796 [2024-11-19 02:02:09.150454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.150474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:122440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.796 [2024-11-19 02:02:09.150487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.150506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:122448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.796 [2024-11-19 02:02:09.150536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.150556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:8 nsid:1 lba:122456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.796 [2024-11-19 02:02:09.150582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.150605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:122464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.796 [2024-11-19 02:02:09.150620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.150640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:122472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.796 [2024-11-19 02:02:09.150655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.150674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:122480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.796 [2024-11-19 02:02:09.150694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.150715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:122488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.796 [2024-11-19 02:02:09.150729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.150749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:122496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.796 [2024-11-19 02:02:09.150763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.150783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:122504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.796 [2024-11-19 02:02:09.150797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.150817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:122512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.796 [2024-11-19 02:02:09.150838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.150860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:122520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.796 [2024-11-19 02:02:09.150874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.150909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:122528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.796 [2024-11-19 02:02:09.150923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 
02:02:09.150942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:122536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.796 [2024-11-19 02:02:09.150956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.150975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:122544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.796 [2024-11-19 02:02:09.150989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.151008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:122552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.796 [2024-11-19 02:02:09.151022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.151045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:122944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.796 [2024-11-19 02:02:09.151061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.151080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:122952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.796 [2024-11-19 02:02:09.151094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.151113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:122960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.796 [2024-11-19 02:02:09.151127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.151146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:122968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.796 [2024-11-19 02:02:09.151160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.151179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:122976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.796 [2024-11-19 02:02:09.151192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.151212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:122984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.796 [2024-11-19 02:02:09.151226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.151245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:122992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.796 [2024-11-19 02:02:09.151260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.151286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:123000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.796 [2024-11-19 02:02:09.151302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.151321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:123008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.796 [2024-11-19 02:02:09.151335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.151354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:123016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.796 [2024-11-19 02:02:09.151367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.151386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.796 [2024-11-19 02:02:09.151400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.151419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.796 [2024-11-19 02:02:09.151433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.151452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.796 [2024-11-19 02:02:09.151466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:45.796 [2024-11-19 02:02:09.151485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:123048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.797 [2024-11-19 02:02:09.151499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.797 [2024-11-19 02:02:09.151545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:122560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.797 [2024-11-19 02:02:09.151562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.797 [2024-11-19 02:02:09.151582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:122568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.797 [2024-11-19 02:02:09.151597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.797 [2024-11-19 02:02:09.151616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:122576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.797 [2024-11-19 02:02:09.151631] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.797 [2024-11-19 02:02:09.151651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:122584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.797 [2024-11-19 02:02:09.151665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:45.797 [2024-11-19 02:02:09.151685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:122592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.797 [2024-11-19 02:02:09.151699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:45.797 [2024-11-19 02:02:09.151726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:122600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.797 [2024-11-19 02:02:09.151741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:45.797 [2024-11-19 02:02:09.151761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:122608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.797 [2024-11-19 02:02:09.151775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:45.797 [2024-11-19 02:02:09.153034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:122616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.797 [2024-11-19 02:02:09.153064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:45.797 [2024-11-19 02:02:09.153091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:123056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.797 [2024-11-19 02:02:09.153111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:45.797 [2024-11-19 02:02:09.153132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:123064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.797 [2024-11-19 02:02:09.153147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:45.797 [2024-11-19 02:02:09.153171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:123072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.797 [2024-11-19 02:02:09.153186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:45.797 [2024-11-19 02:02:09.153205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:123080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.797 [2024-11-19 02:02:09.153219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:45.797 [2024-11-19 02:02:09.153239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:123088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:45.797 [2024-11-19 02:02:09.153253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:45.797 [2024-11-19 02:02:09.153272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:123096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.797 [2024-11-19 02:02:09.153286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:45.797 [2024-11-19 02:02:09.153306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:123104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.797 [2024-11-19 02:02:09.153321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:45.797 [2024-11-19 02:02:09.153463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.797 [2024-11-19 02:02:09.153487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:45.797 [2024-11-19 02:02:09.153541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.797 [2024-11-19 02:02:09.153559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:45.797 [2024-11-19 02:02:09.153580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.797 [2024-11-19 02:02:09.153622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:45.797 [2024-11-19 02:02:09.153645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:123136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.797 [2024-11-19 02:02:09.153660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:45.797 [2024-11-19 02:02:09.153680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:123144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.797 [2024-11-19 02:02:09.153694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:45.797 [2024-11-19 02:02:09.153713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:123152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.797 [2024-11-19 02:02:09.153728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:45.797 [2024-11-19 02:02:09.153747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:123160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.797 [2024-11-19 02:02:09.153761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:45.797 [2024-11-19 02:02:09.153780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 
nsid:1 lba:123168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.797 [2024-11-19 02:02:09.153795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:45.797 [2024-11-19 02:02:09.153819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:123176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.797 [2024-11-19 02:02:09.153835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:45.797 [2024-11-19 02:02:09.153868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:123184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.797 [2024-11-19 02:02:09.153885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:45.797 [2024-11-19 02:02:09.153905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:123192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.797 [2024-11-19 02:02:09.153920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:45.797 9910.33 IOPS, 38.71 MiB/s [2024-11-19T02:02:56.412Z] 9956.90 IOPS, 38.89 MiB/s [2024-11-19T02:02:56.412Z] 9994.27 IOPS, 39.04 MiB/s [2024-11-19T02:02:56.412Z] 10025.42 IOPS, 39.16 MiB/s [2024-11-19T02:02:56.412Z] 10051.77 IOPS, 39.26 MiB/s [2024-11-19T02:02:56.412Z] 10074.36 IOPS, 39.35 MiB/s [2024-11-19T02:02:56.412Z] [2024-11-19 02:02:15.688188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.797 [2024-11-19 02:02:15.688241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:45.797 [2024-11-19 02:02:15.688308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.797 [2024-11-19 02:02:15.688328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:45.797 [2024-11-19 02:02:15.688349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.797 [2024-11-19 02:02:15.688363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:45.797 [2024-11-19 02:02:15.688404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.797 [2024-11-19 02:02:15.688419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:45.797 [2024-11-19 02:02:15.688438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.797 [2024-11-19 02:02:15.688451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:45.797 [2024-11-19 02:02:15.688469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:46 nsid:1 lba:5360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.798 [2024-11-19 02:02:15.688482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:45.798 [2024-11-19 02:02:15.688500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.798 [2024-11-19 02:02:15.688542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:45.798 [2024-11-19 02:02:15.688566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.798 [2024-11-19 02:02:15.688580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:45.798 [2024-11-19 02:02:15.688604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.798 [2024-11-19 02:02:15.688619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:45.798 [2024-11-19 02:02:15.688638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.798 [2024-11-19 02:02:15.688651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:45.798 [2024-11-19 02:02:15.688670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.798 [2024-11-19 02:02:15.688683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:45.798 [2024-11-19 02:02:15.688702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.798 [2024-11-19 02:02:15.688715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:45.798 [2024-11-19 02:02:15.688734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.798 [2024-11-19 02:02:15.688747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:45.798 [2024-11-19 02:02:15.688765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.798 [2024-11-19 02:02:15.688778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:45.798 [2024-11-19 02:02:15.688797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.798 [2024-11-19 02:02:15.688810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:45.798 [2024-11-19 02:02:15.688838] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.798 [2024-11-19 02:02:15.688853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:45.798 [2024-11-19 02:02:15.688872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.798 [2024-11-19 02:02:15.688886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:45.798 [2024-11-19 02:02:15.688922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.798 [2024-11-19 02:02:15.688935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:45.798 [2024-11-19 02:02:15.688954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.798 [2024-11-19 02:02:15.688968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:45.798 [2024-11-19 02:02:15.688986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.798 [2024-11-19 02:02:15.689000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:45.798 [2024-11-19 02:02:15.689018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.798 [2024-11-19 02:02:15.689031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:45.798 [2024-11-19 02:02:15.689049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.798 [2024-11-19 02:02:15.689063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:45.798 [2024-11-19 02:02:15.689082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.798 [2024-11-19 02:02:15.689095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:45.798 [2024-11-19 02:02:15.689114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.798 [2024-11-19 02:02:15.689127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:45.798 [2024-11-19 02:02:15.689169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.798 [2024-11-19 02:02:15.689187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007c p:0 m:0 dnr:0 
00:21:45.798 [2024-11-19 02:02:15.689207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.798 [2024-11-19 02:02:15.689221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:45.798 [2024-11-19 02:02:15.689240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.798 [2024-11-19 02:02:15.689253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:45.798 [2024-11-19 02:02:15.689272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.798 [2024-11-19 02:02:15.689295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.798 [2024-11-19 02:02:15.689315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.798 [2024-11-19 02:02:15.689329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.798 [2024-11-19 02:02:15.689347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.798 [2024-11-19 02:02:15.689360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:45.798 [2024-11-19 02:02:15.689379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.798 [2024-11-19 02:02:15.689392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:45.798 [2024-11-19 02:02:15.689411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.798 [2024-11-19 02:02:15.689425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:45.798 [2024-11-19 02:02:15.689443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.798 [2024-11-19 02:02:15.689456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:45.798 [2024-11-19 02:02:15.689475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.798 [2024-11-19 02:02:15.689489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:45.798 [2024-11-19 02:02:15.689507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.798 [2024-11-19 02:02:15.689534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:82 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:45.798 [2024-11-19 02:02:15.689570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.799 [2024-11-19 02:02:15.689584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:45.799 [2024-11-19 02:02:15.689603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.799 [2024-11-19 02:02:15.689616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:45.799 [2024-11-19 02:02:15.689635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.799 [2024-11-19 02:02:15.689648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:45.799 [2024-11-19 02:02:15.689667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.799 [2024-11-19 02:02:15.689681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:45.799 [2024-11-19 02:02:15.689699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.799 [2024-11-19 02:02:15.689713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:45.799 [2024-11-19 02:02:15.689739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.799 [2024-11-19 02:02:15.689753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:45.799 [2024-11-19 02:02:15.689772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.799 [2024-11-19 02:02:15.689786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:45.799 [2024-11-19 02:02:15.689805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.799 [2024-11-19 02:02:15.689819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:45.799 [2024-11-19 02:02:15.689838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.799 [2024-11-19 02:02:15.689851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:45.799 [2024-11-19 02:02:15.689870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.799 [2024-11-19 02:02:15.689883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:45.799 [2024-11-19 02:02:15.689903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.799 [2024-11-19 02:02:15.689927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:45.799 [2024-11-19 02:02:15.689971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.799 [2024-11-19 02:02:15.690004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:45.799 [2024-11-19 02:02:15.690026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.799 [2024-11-19 02:02:15.690041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:45.799 [2024-11-19 02:02:15.690061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.799 [2024-11-19 02:02:15.690076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:45.799 [2024-11-19 02:02:15.690096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.799 [2024-11-19 02:02:15.690111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:45.799 [2024-11-19 02:02:15.690131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.799 [2024-11-19 02:02:15.690146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:45.799 [2024-11-19 02:02:15.690165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.799 [2024-11-19 02:02:15.690180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:45.799 [2024-11-19 02:02:15.690207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.799 [2024-11-19 02:02:15.690223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:45.799 [2024-11-19 02:02:15.690243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.799 [2024-11-19 02:02:15.690258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:45.799 [2024-11-19 02:02:15.690306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.799 [2024-11-19 02:02:15.690319] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:45.799 [2024-11-19 02:02:15.690353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.799 [2024-11-19 02:02:15.690366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:45.799 [2024-11-19 02:02:15.690388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.799 [2024-11-19 02:02:15.690403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:45.799 [2024-11-19 02:02:15.690422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.799 [2024-11-19 02:02:15.690435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:45.799 [2024-11-19 02:02:15.690453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.799 [2024-11-19 02:02:15.690466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:45.799 [2024-11-19 02:02:15.690485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.799 [2024-11-19 02:02:15.690498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:45.799 [2024-11-19 02:02:15.690516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.799 [2024-11-19 02:02:15.690529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:45.799 [2024-11-19 02:02:15.690548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.799 [2024-11-19 02:02:15.690561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:45.799 [2024-11-19 02:02:15.690599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.799 [2024-11-19 02:02:15.690615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:45.799 [2024-11-19 02:02:15.690634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.799 [2024-11-19 02:02:15.690648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:45.799 [2024-11-19 02:02:15.690666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:45.800 [2024-11-19 02:02:15.690687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.690707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.800 [2024-11-19 02:02:15.690721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.690740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.800 [2024-11-19 02:02:15.690754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.690772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.800 [2024-11-19 02:02:15.690785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.690803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.800 [2024-11-19 02:02:15.690817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.690835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.800 [2024-11-19 02:02:15.690849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.690867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.800 [2024-11-19 02:02:15.690880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.690898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.800 [2024-11-19 02:02:15.690912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.690930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.800 [2024-11-19 02:02:15.690943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.690962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.800 [2024-11-19 02:02:15.690975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.690993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 
lba:5080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.800 [2024-11-19 02:02:15.691007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.691026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.800 [2024-11-19 02:02:15.691039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.691057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.800 [2024-11-19 02:02:15.691076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.691096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.800 [2024-11-19 02:02:15.691109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.691128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.800 [2024-11-19 02:02:15.691142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.691161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.800 [2024-11-19 02:02:15.691174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.691193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.800 [2024-11-19 02:02:15.691206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.691225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.800 [2024-11-19 02:02:15.691239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.691257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.800 [2024-11-19 02:02:15.691270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.691289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.800 [2024-11-19 02:02:15.691302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.691321] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.800 [2024-11-19 02:02:15.691334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.691352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.800 [2024-11-19 02:02:15.691366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.691384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.800 [2024-11-19 02:02:15.691397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.691416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.800 [2024-11-19 02:02:15.691429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.691462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.800 [2024-11-19 02:02:15.691480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.691524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.800 [2024-11-19 02:02:15.691540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.691560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.800 [2024-11-19 02:02:15.691573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.691592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.800 [2024-11-19 02:02:15.691605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.691623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.800 [2024-11-19 02:02:15.691637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.691655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.800 [2024-11-19 02:02:15.691668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:45.800 
[2024-11-19 02:02:15.691687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.800 [2024-11-19 02:02:15.691701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.691720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.800 [2024-11-19 02:02:15.691733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.691751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.800 [2024-11-19 02:02:15.691764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:45.800 [2024-11-19 02:02:15.691783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.800 [2024-11-19 02:02:15.691796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:45.801 [2024-11-19 02:02:15.691815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.801 [2024-11-19 02:02:15.691828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:45.801 [2024-11-19 02:02:15.691846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.801 [2024-11-19 02:02:15.691859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:45.801 [2024-11-19 02:02:15.691878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.801 [2024-11-19 02:02:15.691891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:45.801 [2024-11-19 02:02:15.691932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.801 [2024-11-19 02:02:15.691947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:45.801 [2024-11-19 02:02:15.691966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.801 [2024-11-19 02:02:15.691980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:45.801 [2024-11-19 02:02:15.691999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.801 [2024-11-19 02:02:15.692013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 
sqhd:004b p:0 m:0 dnr:0 00:21:45.801 [2024-11-19 02:02:15.692032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.801 [2024-11-19 02:02:15.692046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:45.801 [2024-11-19 02:02:15.692065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.801 [2024-11-19 02:02:15.692080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:45.801 [2024-11-19 02:02:15.692099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.801 [2024-11-19 02:02:15.692112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:45.801 [2024-11-19 02:02:15.692131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.801 [2024-11-19 02:02:15.692145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:45.801 [2024-11-19 02:02:15.692164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.801 [2024-11-19 02:02:15.692178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:45.801 [2024-11-19 02:02:15.692197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.801 [2024-11-19 02:02:15.692210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:45.801 [2024-11-19 02:02:15.692237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.801 [2024-11-19 02:02:15.692252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:45.801 [2024-11-19 02:02:15.692272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.801 [2024-11-19 02:02:15.692286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:45.801 [2024-11-19 02:02:15.692305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.801 [2024-11-19 02:02:15.692319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:45.801 [2024-11-19 02:02:15.692960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.801 [2024-11-19 02:02:15.692996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:21:45.801 [2024-11-19 02:02:15.693028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.801 [2024-11-19 02:02:15.693044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
[... 13 similar command/completion pairs elided (02:02:15.693): READ lba:5312, then WRITE lba:5800-5888 in steps of 8; every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 0057-0063 ...]
00:21:45.801 9871.00 IOPS, 38.56 MiB/s [2024-11-19T02:02:56.416Z] 9446.12 IOPS, 36.90 MiB/s [2024-11-19T02:02:56.416Z] 9491.41 IOPS, 37.08 MiB/s [2024-11-19T02:02:56.416Z] 9528.11 IOPS, 37.22 MiB/s [2024-11-19T02:02:56.416Z] 9560.47 IOPS, 37.35 MiB/s [2024-11-19T02:02:56.416Z] 9586.45 IOPS, 37.45 MiB/s [2024-11-19T02:02:56.416Z] 9608.43 IOPS, 37.53 MiB/s [2024-11-19T02:02:56.416Z]
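The progress ticker pairs an IOPS figure with MiB/s. The two columns are consistent with 4 KiB I/Os, an assumption inferred from the len:8 (eight 512-byte blocks) READ/WRITE commands rather than stated anywhere in the log; a quick Python sanity check:

```python
# Sanity-check sketch: each "IOPS, MiB/s" pair in the ticker matches
# throughput = IOPS * I/O size. IO_SIZE is an assumption inferred from the
# len:8 (eight 512-byte blocks) commands in this log, not stated by it.
IO_SIZE = 8 * 512  # 4 KiB per I/O

def mib_per_s(iops: float, io_size: int = IO_SIZE) -> float:
    """Convert an IOPS figure to MiB/s at a fixed I/O size."""
    return iops * io_size / 2**20

# (IOPS, reported MiB/s) pairs taken verbatim from the ticker above.
for iops, reported in [(9871.00, 38.56), (9446.12, 36.90), (9608.43, 37.53)]:
    assert round(mib_per_s(iops), 2) == reported
```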
00:21:45.802 [2024-11-19 02:02:22.660732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:45.802 [2024-11-19 02:02:22.660785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
[... 122 similar command/completion pairs elided (02:02:22.660-02:02:22.666): WRITE lba:93328-93912 and READ lba:92936-93312 interleaved, all in steps of 8; every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 0013-007f wrapping to 0000-000c ...]
00:21:45.805 [2024-11-19 02:02:22.666201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:93920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:45.805 [2024-11-19 02:02:22.666216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0
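Every completion line above and below carries the raw NVMe status as a hex (SCT/SC) pair after the status string. Per the NVMe spec, (03/02) is Status Code Type 3h (Path Related) with Status Code 2h, Asymmetric Access Inaccessible, i.e. the ANA state of this controller path forbids I/O; the (00/08) that appears in the final burst is Generic Status 8h, Command Aborted due to SQ Deletion. A minimal Python decoding sketch, mapping only the two code points that occur in this log:

```python
# Minimal sketch: decode the hex "(SCT/SC)" pair that spdk_nvme_print_completion
# appends to each status string. Only the two NVMe-spec code points that occur
# in this log are mapped; anything else falls through to a generic label.
STATUS = {
    (0x3, 0x2): "Path Related: Asymmetric Access Inaccessible",  # "(03/02)"
    (0x0, 0x8): "Generic: Command Aborted due to SQ Deletion",   # "(00/08)"
}

def decode(sct: int, sc: int) -> str:
    """Map a (status code type, status code) pair to a readable description."""
    return STATUS.get((sct, sc), f"unmapped status (sct={sct:#x}, sc={sc:#x})")

print(decode(0x3, 0x2))  # Path Related: Asymmetric Access Inaccessible
print(decode(0x0, 0x8))  # Generic: Command Aborted due to SQ Deletion
```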
00:21:45.805 9464.77 IOPS, 36.97 MiB/s [2024-11-19T02:02:56.420Z] 9053.26 IOPS, 35.36 MiB/s [2024-11-19T02:02:56.420Z] 8676.04 IOPS, 33.89 MiB/s [2024-11-19T02:02:56.420Z] 8329.00 IOPS, 32.54 MiB/s [2024-11-19T02:02:56.420Z] 8008.65 IOPS, 31.28 MiB/s [2024-11-19T02:02:56.421Z] 7712.04 IOPS, 30.13 MiB/s [2024-11-19T02:02:56.421Z] 7436.61 IOPS, 29.05 MiB/s [2024-11-19T02:02:56.421Z] 7290.86 IOPS, 28.48 MiB/s [2024-11-19T02:02:56.421Z] 7380.63 IOPS, 28.83 MiB/s [2024-11-19T02:02:56.421Z] 7465.00 IOPS, 29.16 MiB/s [2024-11-19T02:02:56.421Z] 7544.59 IOPS, 29.47 MiB/s [2024-11-19T02:02:56.421Z] 7618.73 IOPS, 29.76 MiB/s [2024-11-19T02:02:56.421Z] 7690.88 IOPS, 30.04 MiB/s [2024-11-19T02:02:56.421Z]
00:21:45.806 [2024-11-19 02:02:35.978564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:45.806 [2024-11-19 02:02:35.978626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
[... 23 similar command/completion pairs elided (02:02:35.978-02:02:35.979): WRITE lba:71576-71688 and READ lba:71056-71112 interleaved; every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 0041-0057 ...]
00:21:45.806 [2024-11-19 02:02:35.979519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:45.806 [2024-11-19 02:02:35.979554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 31 further command/completion pairs elided (02:02:35.979-02:02:35.980): WRITE lba:71704-71752 and READ lba:71120-71304; every remaining completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 sqhd:0000 ...]
00:21:45.807 [2024-11-19 02:02:35.980413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.807 [2024-11-19 02:02:35.980426]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.807 [2024-11-19 02:02:35.980440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.807 [2024-11-19 02:02:35.980452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.807 [2024-11-19 02:02:35.980465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.807 [2024-11-19 02:02:35.980478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.807 [2024-11-19 02:02:35.980491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.807 [2024-11-19 02:02:35.980503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.807 [2024-11-19 02:02:35.980527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.807 [2024-11-19 02:02:35.980543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.807 [2024-11-19 02:02:35.980556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.807 [2024-11-19 02:02:35.980569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.807 [2024-11-19 02:02:35.980582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.807 [2024-11-19 02:02:35.980595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.807 [2024-11-19 02:02:35.980609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.807 [2024-11-19 02:02:35.980621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.807 [2024-11-19 02:02:35.980635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:71312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.807 [2024-11-19 02:02:35.980647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.807 [2024-11-19 02:02:35.980661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.807 [2024-11-19 02:02:35.980673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.807 [2024-11-19 02:02:35.980687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:71328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.807 [2024-11-19 02:02:35.980699] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.980713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.808 [2024-11-19 02:02:35.980725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.980739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:71344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.808 [2024-11-19 02:02:35.980757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.980771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:71352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.808 [2024-11-19 02:02:35.980784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.980798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:71360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.808 [2024-11-19 02:02:35.980810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.980823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.808 [2024-11-19 02:02:35.980836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.980850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:71376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.808 [2024-11-19 02:02:35.980862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.980877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:71384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.808 [2024-11-19 02:02:35.980889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.980903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:71392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.808 [2024-11-19 02:02:35.980915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.980929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:71400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.808 [2024-11-19 02:02:35.980941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.980954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:71408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.808 [2024-11-19 02:02:35.980966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.980980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:71416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.808 [2024-11-19 02:02:35.980992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.981005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:71424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.808 [2024-11-19 02:02:35.981018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.981031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:71432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.808 [2024-11-19 02:02:35.981044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.981058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.808 [2024-11-19 02:02:35.981070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.981084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.808 [2024-11-19 02:02:35.981101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.981116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.808 [2024-11-19 02:02:35.981128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.981142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.808 [2024-11-19 02:02:35.981154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.981168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.808 [2024-11-19 02:02:35.981180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.981193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.808 [2024-11-19 02:02:35.981206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.981219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.808 [2024-11-19 02:02:35.981231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:45.808 [2024-11-19 02:02:35.981245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.808 [2024-11-19 02:02:35.981257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.981270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:71440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.808 [2024-11-19 02:02:35.981283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.981303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:71448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.808 [2024-11-19 02:02:35.981316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.981330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:71456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.808 [2024-11-19 02:02:35.981343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.981356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:71464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.808 [2024-11-19 02:02:35.981369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.981382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:71472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.808 [2024-11-19 02:02:35.981394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.981408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:71480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.808 [2024-11-19 02:02:35.981420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.981440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:71488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.808 [2024-11-19 02:02:35.981453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.981470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:71496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.808 [2024-11-19 02:02:35.981483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.981508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:71504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.808 [2024-11-19 02:02:35.981523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.981537] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.808 [2024-11-19 02:02:35.981550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.981563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:71520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.808 [2024-11-19 02:02:35.981576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.981589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:71528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.808 [2024-11-19 02:02:35.981601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.808 [2024-11-19 02:02:35.981615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:71536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.809 [2024-11-19 02:02:35.981627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.809 [2024-11-19 02:02:35.981640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:71544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.809 [2024-11-19 02:02:35.981653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.809 [2024-11-19 02:02:35.981666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.809 [2024-11-19 02:02:35.981678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.809 [2024-11-19 02:02:35.981691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa81ff0 is same with the state(6) to be set 00:21:45.809 [2024-11-19 02:02:35.981707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:45.809 [2024-11-19 02:02:35.981717] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:45.809 [2024-11-19 02:02:35.981726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71560 len:8 PRP1 0x0 PRP2 0x0 00:21:45.809 [2024-11-19 02:02:35.981740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.809 [2024-11-19 02:02:35.981753] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:45.809 [2024-11-19 02:02:35.981762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:45.809 [2024-11-19 02:02:35.981772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71888 len:8 PRP1 0x0 PRP2 0x0 00:21:45.809 [2024-11-19 02:02:35.981783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.809 [2024-11-19 02:02:35.981801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:45.809 [2024-11-19 
02:02:35.981811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:45.809 [2024-11-19 02:02:35.981820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71896 len:8 PRP1 0x0 PRP2 0x0 00:21:45.809 [2024-11-19 02:02:35.981831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.809 [2024-11-19 02:02:35.981843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:45.809 [2024-11-19 02:02:35.981852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:45.809 [2024-11-19 02:02:35.981861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71904 len:8 PRP1 0x0 PRP2 0x0 00:21:45.809 [2024-11-19 02:02:35.981874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.809 [2024-11-19 02:02:35.981886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:45.809 [2024-11-19 02:02:35.981894] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:45.809 [2024-11-19 02:02:35.981903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71912 len:8 PRP1 0x0 PRP2 0x0 00:21:45.809 [2024-11-19 02:02:35.981915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.809 [2024-11-19 02:02:35.981926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:45.809 [2024-11-19 02:02:35.981935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:45.809 [2024-11-19 02:02:35.981944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71920 len:8 PRP1 0x0 PRP2 0x0 00:21:45.809 [2024-11-19 02:02:35.981984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.809 [2024-11-19 02:02:35.981997] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:45.809 [2024-11-19 02:02:35.982007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:45.809 [2024-11-19 02:02:35.982017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71928 len:8 PRP1 0x0 PRP2 0x0 00:21:45.809 [2024-11-19 02:02:35.982029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.809 [2024-11-19 02:02:35.982042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:45.809 [2024-11-19 02:02:35.982051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:45.809 [2024-11-19 02:02:35.982060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71936 len:8 PRP1 0x0 PRP2 0x0 00:21:45.809 [2024-11-19 02:02:35.982073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.809 [2024-11-19 02:02:35.982085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:45.809 [2024-11-19 02:02:35.982094] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:45.809 [2024-11-19 02:02:35.982104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71944 len:8 PRP1 0x0 PRP2 0x0 00:21:45.809 [2024-11-19 02:02:35.982118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.809 [2024-11-19 02:02:35.982131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:45.809 [2024-11-19 02:02:35.982140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:45.809 [2024-11-19 02:02:35.982150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71952 len:8 PRP1 0x0 PRP2 0x0 00:21:45.809 [2024-11-19 02:02:35.982168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.809 [2024-11-19 02:02:35.982181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:45.809 [2024-11-19 02:02:35.982191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:45.809 [2024-11-19 02:02:35.982200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71960 len:8 PRP1 0x0 PRP2 0x0 00:21:45.809 [2024-11-19 02:02:35.982212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.809 [2024-11-19 02:02:35.982225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:45.809 [2024-11-19 02:02:35.982234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:45.809 [2024-11-19 02:02:35.982244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71968 len:8 PRP1 0x0 PRP2 0x0 00:21:45.809 [2024-11-19 02:02:35.982257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.809 [2024-11-19 02:02:35.982270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:45.809 [2024-11-19 02:02:35.982294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:45.809 [2024-11-19 02:02:35.982303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71976 len:8 PRP1 0x0 PRP2 0x0 00:21:45.809 [2024-11-19 02:02:35.982314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.809 [2024-11-19 02:02:35.982341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:45.809 [2024-11-19 02:02:35.982350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:45.809 [2024-11-19 02:02:35.982359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71984 len:8 PRP1 0x0 PRP2 0x0 00:21:45.809 [2024-11-19 02:02:35.982370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.809 [2024-11-19 02:02:35.982381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:45.809 [2024-11-19 02:02:35.982390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:21:45.809 [2024-11-19 02:02:35.982399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71992 len:8 PRP1 0x0 PRP2 0x0 00:21:45.809 [2024-11-19 02:02:35.982410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.810 [2024-11-19 02:02:35.982422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:45.810 [2024-11-19 02:02:35.982431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:45.810 [2024-11-19 02:02:35.982440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72000 len:8 PRP1 0x0 PRP2 0x0 00:21:45.810 [2024-11-19 02:02:35.982451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.810 [2024-11-19 02:02:35.982463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:45.810 [2024-11-19 02:02:35.982471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:45.810 [2024-11-19 02:02:35.982480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72008 len:8 PRP1 0x0 PRP2 0x0 00:21:45.810 [2024-11-19 02:02:35.982526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.810 [2024-11-19 02:02:35.982545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:45.810 [2024-11-19 02:02:35.982561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:45.810 [2024-11-19 02:02:35.982571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72016 len:8 PRP1 0x0 PRP2 0x0 00:21:45.810 [2024-11-19 02:02:35.982583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.810 [2024-11-19 02:02:35.982595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:45.810 [2024-11-19 02:02:35.982604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:45.810 [2024-11-19 02:02:35.982613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72024 len:8 PRP1 0x0 PRP2 0x0 00:21:45.810 [2024-11-19 02:02:35.982625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.810 [2024-11-19 02:02:35.982637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:45.810 [2024-11-19 02:02:35.982646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:45.810 [2024-11-19 02:02:35.982656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72032 len:8 PRP1 0x0 PRP2 0x0 00:21:45.810 [2024-11-19 02:02:35.982668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.810 [2024-11-19 02:02:35.982681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:45.810 [2024-11-19 02:02:35.982689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:45.810 [2024-11-19 
02:02:35.982698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72040 len:8 PRP1 0x0 PRP2 0x0 00:21:45.810 [2024-11-19 02:02:35.982710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.810 [2024-11-19 02:02:35.982722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:45.810 [2024-11-19 02:02:35.982731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:45.810 [2024-11-19 02:02:35.982740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72048 len:8 PRP1 0x0 PRP2 0x0 00:21:45.810 [2024-11-19 02:02:35.982752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.810 [2024-11-19 02:02:35.982764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:45.810 [2024-11-19 02:02:35.982773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:45.810 [2024-11-19 02:02:35.982782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72056 len:8 PRP1 0x0 PRP2 0x0 00:21:45.810 [2024-11-19 02:02:35.982793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.810 [2024-11-19 02:02:35.982805] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:45.810 [2024-11-19 02:02:35.982814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:45.810 [2024-11-19 02:02:35.982823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72064 len:8 PRP1 0x0 PRP2 0x0 00:21:45.810 [2024-11-19 02:02:35.982835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.810 [2024-11-19 02:02:35.982847] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:45.810 [2024-11-19 02:02:35.982856] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:45.810 [2024-11-19 02:02:35.982865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72072 len:8 PRP1 0x0 PRP2 0x0 00:21:45.810 [2024-11-19 02:02:35.982879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.810 [2024-11-19 02:02:35.983039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.810 [2024-11-19 02:02:35.983065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.810 [2024-11-19 02:02:35.983079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.810 [2024-11-19 02:02:35.983092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.810 [2024-11-19 02:02:35.983104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.810 [2024-11-19 02:02:35.983116] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.810 [2024-11-19 02:02:35.983129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.810 [2024-11-19 02:02:35.983140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.810 [2024-11-19 02:02:35.983154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.810 [2024-11-19 02:02:35.983166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.810 [2024-11-19 02:02:35.983183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4d410 is same with the state(6) to be set 00:21:45.810 [2024-11-19 02:02:35.984134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:21:45.810 [2024-11-19 02:02:35.984171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa4d410 (9): Bad file descriptor 00:21:45.810 [2024-11-19 02:02:35.984526] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:45.810 [2024-11-19 02:02:35.984557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa4d410 with addr=10.0.0.3, port=4421 00:21:45.810 [2024-11-19 02:02:35.984572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa4d410 is same with the state(6) to be set 00:21:45.810 [2024-11-19 02:02:35.984604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa4d410 (9): Bad file descriptor 00:21:45.810 [2024-11-19 02:02:35.984632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:21:45.810 [2024-11-19 02:02:35.984647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:21:45.810 [2024-11-19 02:02:35.984660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:21:45.810 [2024-11-19 02:02:35.984672] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:21:45.810 [2024-11-19 02:02:35.984685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:21:45.810 7749.80 IOPS, 30.27 MiB/s [2024-11-19T02:02:56.425Z] 7813.19 IOPS, 30.52 MiB/s [2024-11-19T02:02:56.425Z] 7864.08 IOPS, 30.72 MiB/s [2024-11-19T02:02:56.425Z] 7919.87 IOPS, 30.94 MiB/s [2024-11-19T02:02:56.425Z] 7972.79 IOPS, 31.14 MiB/s [2024-11-19T02:02:56.425Z] 8025.57 IOPS, 31.35 MiB/s [2024-11-19T02:02:56.425Z] 8075.78 IOPS, 31.55 MiB/s [2024-11-19T02:02:56.425Z] 8121.21 IOPS, 31.72 MiB/s [2024-11-19T02:02:56.425Z] 8158.95 IOPS, 31.87 MiB/s [2024-11-19T02:02:56.425Z] 8201.34 IOPS, 32.04 MiB/s [2024-11-19T02:02:56.425Z] 8240.78 IOPS, 32.19 MiB/s [2024-11-19T02:02:56.425Z] [2024-11-19 02:02:46.041515] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:21:45.810 8282.46 IOPS, 32.35 MiB/s [2024-11-19T02:02:56.425Z] 8320.36 IOPS, 32.50 MiB/s [2024-11-19T02:02:56.425Z] 8357.33 IOPS, 32.65 MiB/s [2024-11-19T02:02:56.425Z] 8388.98 IOPS, 32.77 MiB/s [2024-11-19T02:02:56.425Z] 8416.66 IOPS, 32.88 MiB/s [2024-11-19T02:02:56.425Z] 8447.08 IOPS, 33.00 MiB/s [2024-11-19T02:02:56.425Z] 8479.40 IOPS, 33.12 MiB/s [2024-11-19T02:02:56.425Z] 8507.49 IOPS, 33.23 MiB/s [2024-11-19T02:02:56.425Z] 8534.02 IOPS, 33.34 MiB/s [2024-11-19T02:02:56.425Z] 8558.85 IOPS, 33.43 MiB/s [2024-11-19T02:02:56.425Z] Received shutdown signal, test time was about 55.072810 seconds
00:21:45.810
00:21:45.810 Latency(us)
00:21:45.810 [2024-11-19T02:02:56.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:45.810 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:45.811 Verification LBA range: start 0x0 length 0x4000
00:21:45.811 Nvme0n1 : 55.07 8557.96 33.43 0.00 0.00 14927.03 203.87 7046430.72
00:21:45.811 [2024-11-19T02:02:56.426Z] ===================================================================================================================
00:21:45.811 [2024-11-19T02:02:56.426Z] Total : 8557.96 33.43 0.00 0.00 14927.03 203.87 7046430.72
00:21:45.811 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:46.070 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:21:46.070 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:21:46.070 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:21:46.070 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:46.070 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync
00:21:46.070 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:46.070 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e
00:21:46.070 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:46.070 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:21:46.070 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:46.070 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e
00:21:46.070 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0
00:21:46.070 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 95264 ']'
00:21:46.070 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 95264
00:21:46.070 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 95264 ']'
00:21:46.070 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 95264
00:21:46.070 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname
00:21:46.070 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:46.070 02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95264
killing process with pid 95264
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95264'
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 95264
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 95264
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0
00:21:46.330
00:21:46.330 real 1m1.090s
00:21:46.330 user 2m49.506s
00:21:46.330 sys 0m17.371s
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
02:02:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST nvmf_host_multipath
************************************
02:02:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
02:02:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
02:02:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
02:02:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvmf_timeout
************************************
02:02:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
* Looking for test storage...
* Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]]
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-:
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-:
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<'
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 ))
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
--rc genhtml_branch_coverage=1
--rc genhtml_function_coverage=1
--rc genhtml_legend=1
--rc geninfo_all_blocks=1
--rc geninfo_unexecuted_blocks=1

'
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
--rc genhtml_branch_coverage=1
--rc genhtml_function_coverage=1
--rc genhtml_legend=1
--rc geninfo_all_blocks=1
--rc geninfo_unexecuted_blocks=1

'
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
--rc genhtml_branch_coverage=1
--rc genhtml_function_coverage=1
--rc genhtml_legend=1
--rc geninfo_all_blocks=1
--rc geninfo_unexecuted_blocks=1

'
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
--rc genhtml_branch_coverage=1
--rc genhtml_function_coverage=1
--rc genhtml_legend=1
--rc geninfo_all_blocks=1
--rc geninfo_unexecuted_blocks=1

'
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420
02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.590 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.590 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.590 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.590 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.590 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.590 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.590 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.590 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:21:46.590 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:21:46.590 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.590 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.590 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:46.590 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.590 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:46.590 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:21:46.590 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.590 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.590 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.590 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.590 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.590 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.590 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:21:46.590 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.590 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:21:46.590 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:46.590 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:46.590 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.591 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.591 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.591 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:46.591 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:46.591 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:46.591 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:46.591 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:46.591 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:46.591 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:46.591 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:46.591 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:46.591 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:46.591 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:21:46.591 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:46.591 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.591 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:46.591 02:02:57 
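[annotation] The "integer expression expected" complaint above is test's -eq being handed an empty string inside build_nvmf_app_args; the branch is meant to be skipped, so the message is harmless noise. A hedged sketch of the usual guard (FLAG is an illustrative name, not the actual variable at common.sh line 33):

    FLAG=""                               # unset/empty optional 0/1 flag, as in this run
    if [ "${FLAG:-0}" -eq 1 ]; then       # default empty to 0 so test always sees an integer
        echo "flag-specific app args would be appended here"
    fi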
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:46.591 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:46.591 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.591 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:46.591 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.591 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:46.591 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:46.591 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:46.591 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:46.591 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:46.591 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:46.591 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:46.591 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:46.591 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:46.591 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:46.591 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:46.591 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:46.850 Cannot find device "nvmf_init_br" 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:46.850 Cannot find device "nvmf_init_br2" 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:21:46.850 Cannot find device "nvmf_tgt_br" 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:46.850 Cannot find device "nvmf_tgt_br2" 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:46.850 Cannot find device "nvmf_init_br" 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:46.850 Cannot find device "nvmf_init_br2" 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:46.850 Cannot find device "nvmf_tgt_br" 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:46.850 Cannot find device "nvmf_tgt_br2" 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:46.850 Cannot find device "nvmf_br" 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:46.850 Cannot find device "nvmf_init_if" 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:46.850 Cannot find device "nvmf_init_if2" 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:46.850 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:46.850 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:46.850 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:46.851 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:46.851 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:46.851 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:46.851 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:46.851 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:46.851 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:46.851 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
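[annotation] Everything nvmf_veth_init traced above reduces to a small bridged-veth topology: an initiator-side veth in the root namespace, a target-side veth inside nvmf_tgt_ns_spdk, and a bridge joining the two stubs. A condensed sketch with the same names and addresses (the second initiator/target pair and the iptables rule comments are elided):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side stays in the root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge the two stubs together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT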
00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:47.110 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:47.110 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:21:47.110 00:21:47.110 --- 10.0.0.3 ping statistics --- 00:21:47.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.110 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:47.110 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:47.110 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:21:47.110 00:21:47.110 --- 10.0.0.4 ping statistics --- 00:21:47.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.110 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:47.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:47.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:21:47.110 00:21:47.110 --- 10.0.0.1 ping statistics --- 00:21:47.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.110 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:47.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:47.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:21:47.110 00:21:47.110 --- 10.0.0.2 ping statistics --- 00:21:47.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.110 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=96485 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 96485 00:21:47.110 02:02:57 
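[annotation] nvmfappstart launches nvmf_tgt inside the namespace and then blocks in waitforlisten until the RPC socket answers. A hedged sketch of that wait loop (the real helper in autotest_common.sh is more elaborate; this only shows the idea):

    wait_for_rpc_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died before it started listening
            # rpc_get_methods is a cheap query; success means the socket is live
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }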
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 96485 ']' 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:47.110 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:47.110 [2024-11-19 02:02:57.648238] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:21:47.110 [2024-11-19 02:02:57.648330] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.372 [2024-11-19 02:02:57.793899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:47.372 [2024-11-19 02:02:57.812663] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.372 [2024-11-19 02:02:57.812729] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.372 [2024-11-19 02:02:57.812754] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.372 [2024-11-19 02:02:57.812762] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:47.372 [2024-11-19 02:02:57.812768] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
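[annotation] The "-m 0x3" passed to nvmfappstart above is a CPU mask, which is why DPDK reports two cores and two reactors start just below. A quick sketch of decoding such a mask:

    mask=0x3
    for ((cpu = 0; cpu < 8; cpu++)); do
        (( (mask >> cpu) & 1 )) && echo "reactor expected on core $cpu"   # prints core 0 and core 1 for 0x3
    done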
00:21:47.372 [2024-11-19 02:02:57.813517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.372 [2024-11-19 02:02:57.813523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.372 [2024-11-19 02:02:57.842770] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:47.372 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:47.372 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:21:47.372 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:47.372 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:47.372 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:47.372 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.372 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:47.372 02:02:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:47.631 [2024-11-19 02:02:58.244108] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:47.889 02:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:48.148 Malloc0 00:21:48.148 02:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:48.405 02:02:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:48.695 02:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:48.695 [2024-11-19 02:02:59.312059] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:48.964 02:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=96532 00:21:48.964 02:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:21:48.964 02:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 96532 /var/tmp/bdevperf.sock 00:21:48.964 02:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 96532 ']' 00:21:48.964 02:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:48.964 02:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:48.964 02:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:48.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
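[annotation] Collected in one place, the target-side provisioning the trace above just performed (flags copied from the traced rpc.py calls; only the rpc path is shortened for readability):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                  # transport flags exactly as traced
    $rpc bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB RAM bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420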
00:21:48.964 02:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:48.964 02:02:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:48.964 [2024-11-19 02:02:59.383918] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:21:48.964 [2024-11-19 02:02:59.384005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96532 ] 00:21:48.964 [2024-11-19 02:02:59.537265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.964 [2024-11-19 02:02:59.561330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.223 [2024-11-19 02:02:59.594122] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:49.788 02:03:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:49.788 02:03:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:21:49.788 02:03:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:50.045 02:03:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:21:50.303 NVMe0n1 00:21:50.303 02:03:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:50.303 02:03:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=96555 00:21:50.303 02:03:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:21:50.561 Running I/O for 10 seconds... 
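[annotation] The host side attaches the remote namespace through bdevperf's private RPC socket with deliberately short failure timers; removing the listener a second later is what drives the reconnect and abort path below. A sketch with the traced flags:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    $rpc -s $sock bdev_nvme_set_options -r -1                     # retry policy as traced (see rpc.py bdev_nvme_set_options --help for -r)
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2        # give the controller up after 5 s, retry every 2 s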
00:21:51.496 02:03:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:51.759 7701.00 IOPS, 30.08 MiB/s [2024-11-19T02:03:02.374Z] [2024-11-19 02:03:02.154208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a13e0 is same with the state(6) to be set
...
00:21:51.759 [2024-11-19 02:03:02.155263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:51.759 [2024-11-19 02:03:02.155291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:51.759 [2024-11-19 02:03:02.155304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:51.759 [2024-11-19 02:03:02.155313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:51.759 [2024-11-19 02:03:02.155323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:51.759 [2024-11-19 02:03:02.155332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:51.759 [2024-11-19 02:03:02.155343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:51.759 [2024-11-19 02:03:02.155352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:51.759 [2024-11-19 02:03:02.155361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd90160 is same with the state(6) to be set
...
00:21:51.760 [2024-11-19 02:03:02.158556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15a13e0 is same with the state(6) to be set
00:21:51.760 [2024-11-19 02:03:02.158624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:51.760 [2024-11-19 02:03:02.158642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
...
00:21:51.761 [2024-11-19 02:03:02.159734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:70352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:51.762 [2024-11-19 02:03:02.159742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:51.762 [2024-11-19 02:03:02.159753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:70360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.159762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.159772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:70368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.159781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.159791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:70376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.159799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.159817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:70384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.159826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.159837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:70392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.159845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.159856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:70400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.159864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.159875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:70408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.159884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.159894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:70416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.159903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.159913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:70424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.159922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.159932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:70432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.159945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.159955] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:70440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.159964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.159974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:70448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.159983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.159993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:70456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.160002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.160012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:70464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.160021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.160031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:70472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.160054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.160064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:70480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.160073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.160083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:70488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.160091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.160102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:70496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.160110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.160120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:70504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.160128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.160141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:70512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.160150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.160160] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:70520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.160168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.160178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:70528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.160186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.160196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:70536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.160204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.160214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:70544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.160223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.160233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:70552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.160242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.160252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:70560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.160262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.160272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:70568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.160296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.160306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:70576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.160315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.160325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:70584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.160334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.160344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:70592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.160353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.160363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:94 nsid:1 lba:70600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.160372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.160382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:70608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.160391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.160401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:70616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.160409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.762 [2024-11-19 02:03:02.160420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:70624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.762 [2024-11-19 02:03:02.160428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.160439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:70632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.763 [2024-11-19 02:03:02.160447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.160460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:70640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.763 [2024-11-19 02:03:02.160468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.160479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:70648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.763 [2024-11-19 02:03:02.160488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.160498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:70656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.763 [2024-11-19 02:03:02.160507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.160517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:70664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.763 [2024-11-19 02:03:02.160526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.160536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:70672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.763 [2024-11-19 02:03:02.160554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.160565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:70680 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.763 [2024-11-19 02:03:02.160574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.160585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.763 [2024-11-19 02:03:02.160609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.160619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:70696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.763 [2024-11-19 02:03:02.160628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.160638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.763 [2024-11-19 02:03:02.160646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.160656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:70712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.763 [2024-11-19 02:03:02.160664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.160674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.763 [2024-11-19 02:03:02.160682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.160692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:70728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.763 [2024-11-19 02:03:02.160700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.160710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:70736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.763 [2024-11-19 02:03:02.160719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.160729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.763 [2024-11-19 02:03:02.160737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.160747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:70752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.763 [2024-11-19 02:03:02.160755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.160765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:70760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:51.763 [2024-11-19 02:03:02.160773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.160785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:70768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.763 [2024-11-19 02:03:02.160793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.160803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:70776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.763 [2024-11-19 02:03:02.160812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.160822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:70784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.763 [2024-11-19 02:03:02.160830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.160840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:70792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.763 [2024-11-19 02:03:02.160848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.160858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:70800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.763 [2024-11-19 02:03:02.160866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.160876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.763 [2024-11-19 02:03:02.160885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.160896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:70832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.763 [2024-11-19 02:03:02.160906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.160916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.763 [2024-11-19 02:03:02.160939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.160950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.763 [2024-11-19 02:03:02.160958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.160969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.763 [2024-11-19 02:03:02.160977] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.160988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.763 [2024-11-19 02:03:02.160996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.161006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.763 [2024-11-19 02:03:02.161015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.161025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.763 [2024-11-19 02:03:02.161033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.161044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.763 [2024-11-19 02:03:02.161052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.161062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:70896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.763 [2024-11-19 02:03:02.161071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.763 [2024-11-19 02:03:02.161081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.764 [2024-11-19 02:03:02.161090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.764 [2024-11-19 02:03:02.161102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.764 [2024-11-19 02:03:02.161111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.764 [2024-11-19 02:03:02.161121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.764 [2024-11-19 02:03:02.161130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.764 [2024-11-19 02:03:02.161141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.764 [2024-11-19 02:03:02.161149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.764 [2024-11-19 02:03:02.161160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:51.764 [2024-11-19 02:03:02.161168] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.764 [2024-11-19 02:03:02.161179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:70808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:51.764 [2024-11-19 02:03:02.161187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.764 [2024-11-19 02:03:02.161196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb1bf0 is same with the state(6) to be set 00:21:51.764 [2024-11-19 02:03:02.161207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:51.764 [2024-11-19 02:03:02.161214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:51.764 [2024-11-19 02:03:02.161223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70816 len:8 PRP1 0x0 PRP2 0x0 00:21:51.764 [2024-11-19 02:03:02.161232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.764 [2024-11-19 02:03:02.161533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:51.764 [2024-11-19 02:03:02.161577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd90160 (9): Bad file descriptor 00:21:51.764 [2024-11-19 02:03:02.161671] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.764 [2024-11-19 02:03:02.161700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd90160 with addr=10.0.0.3, port=4420 00:21:51.764 [2024-11-19 02:03:02.161711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd90160 is same with the state(6) to be set 00:21:51.764 [2024-11-19 02:03:02.161729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd90160 (9): Bad file descriptor 00:21:51.764 [2024-11-19 02:03:02.161744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:51.764 [2024-11-19 02:03:02.161753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:51.764 [2024-11-19 02:03:02.161763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:51.764 [2024-11-19 02:03:02.161773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:21:51.764 [2024-11-19 02:03:02.161782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:51.764 02:03:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:21:53.636 4370.00 IOPS, 17.07 MiB/s [2024-11-19T02:03:04.251Z] 2913.33 IOPS, 11.38 MiB/s [2024-11-19T02:03:04.251Z] [2024-11-19 02:03:04.162005] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:53.636 [2024-11-19 02:03:04.162084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd90160 with addr=10.0.0.3, port=4420 00:21:53.636 [2024-11-19 02:03:04.162100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd90160 is same with the state(6) to be set 00:21:53.636 [2024-11-19 02:03:04.162123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd90160 (9): Bad file descriptor 00:21:53.636 [2024-11-19 02:03:04.162151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:53.636 [2024-11-19 02:03:04.162163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:53.636 [2024-11-19 02:03:04.162174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:53.636 [2024-11-19 02:03:04.162185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:53.636 [2024-11-19 02:03:04.162196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:53.636 02:03:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:21:53.636 02:03:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:53.636 02:03:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:21:53.907 02:03:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:21:53.907 02:03:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:21:53.907 02:03:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:21:53.907 02:03:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:21:54.221 02:03:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:21:54.221 02:03:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:21:55.423 2185.00 IOPS, 8.54 MiB/s [2024-11-19T02:03:06.297Z] 1748.00 IOPS, 6.83 MiB/s [2024-11-19T02:03:06.297Z] [2024-11-19 02:03:06.162392] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.682 [2024-11-19 02:03:06.162467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd90160 with addr=10.0.0.3, port=4420 00:21:55.682 [2024-11-19 02:03:06.162482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd90160 is same with the state(6) to be set 00:21:55.682 [2024-11-19 02:03:06.162503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd90160 (9): Bad file descriptor 00:21:55.682 [2024-11-19 02:03:06.162531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:55.682 [2024-11-19 02:03:06.162542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:55.682 [2024-11-19 02:03:06.162552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:55.682 [2024-11-19 02:03:06.162562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:55.682 [2024-11-19 02:03:06.162572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:57.561 1456.67 IOPS, 5.69 MiB/s [2024-11-19T02:03:08.176Z] 1248.57 IOPS, 4.88 MiB/s [2024-11-19T02:03:08.176Z] [2024-11-19 02:03:08.162675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:57.561 [2024-11-19 02:03:08.162727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:57.561 [2024-11-19 02:03:08.162753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:57.561 [2024-11-19 02:03:08.162761] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:21:57.561 [2024-11-19 02:03:08.162772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:58.760 1092.50 IOPS, 4.27 MiB/s 00:21:58.760 Latency(us) 00:21:58.760 [2024-11-19T02:03:09.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.760 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:58.760 Verification LBA range: start 0x0 length 0x4000 00:21:58.760 NVMe0n1 : 8.16 1071.59 4.19 15.69 0.00 117698.43 3455.53 7046430.72 00:21:58.760 [2024-11-19T02:03:09.375Z] =================================================================================================================== 00:21:58.760 [2024-11-19T02:03:09.375Z] Total : 1071.59 4.19 15.69 0.00 117698.43 3455.53 7046430.72 00:21:58.760 { 00:21:58.760 "results": [ 00:21:58.760 { 00:21:58.760 "job": "NVMe0n1", 00:21:58.760 "core_mask": "0x4", 00:21:58.760 "workload": "verify", 00:21:58.760 "status": "finished", 00:21:58.760 "verify_range": { 00:21:58.760 "start": 0, 00:21:58.760 "length": 16384 00:21:58.760 }, 00:21:58.760 "queue_depth": 128, 00:21:58.760 "io_size": 4096, 00:21:58.760 "runtime": 8.156067, 00:21:58.760 "iops": 1071.5949243673451, 00:21:58.760 "mibps": 4.185917673309942, 00:21:58.760 "io_failed": 128, 00:21:58.760 "io_timeout": 0, 00:21:58.760 "avg_latency_us": 117698.43151187108, 00:21:58.760 "min_latency_us": 3455.5345454545454, 00:21:58.760 "max_latency_us": 7046430.72 00:21:58.760 } 00:21:58.760 ], 00:21:58.760 "core_count": 1 00:21:58.760 } 00:21:59.329 02:03:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:21:59.329 02:03:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:59.329 02:03:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:21:59.588 02:03:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:21:59.588 02:03:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:21:59.588 02:03:09 
nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:21:59.588 02:03:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:21:59.588 02:03:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:21:59.588 02:03:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 96555 00:21:59.588 02:03:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 96532 00:21:59.588 02:03:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 96532 ']' 00:21:59.588 02:03:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 96532 00:21:59.588 02:03:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:21:59.588 02:03:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:59.588 02:03:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96532 00:21:59.847 02:03:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:59.847 02:03:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:59.847 killing process with pid 96532 00:21:59.847 02:03:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96532' 00:21:59.847 02:03:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 96532 00:21:59.847 Received shutdown signal, test time was about 9.221589 seconds 00:21:59.847 00:21:59.847 Latency(us) 00:21:59.847 [2024-11-19T02:03:10.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.847 [2024-11-19T02:03:10.462Z] =================================================================================================================== 00:21:59.847 [2024-11-19T02:03:10.463Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:59.848 02:03:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 96532 00:21:59.848 02:03:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:00.106 [2024-11-19 02:03:10.616371] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:00.106 02:03:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=96673 00:22:00.106 02:03:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:00.106 02:03:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 96673 /var/tmp/bdevperf.sock 00:22:00.106 02:03:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 96673 ']' 00:22:00.106 02:03:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:00.106 02:03:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:00.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:00.106 02:03:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:00.106 02:03:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:00.106 02:03:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:00.106 [2024-11-19 02:03:10.677901] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:22:00.106 [2024-11-19 02:03:10.678030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96673 ] 00:22:00.366 [2024-11-19 02:03:10.819129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.366 [2024-11-19 02:03:10.837655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:00.366 [2024-11-19 02:03:10.864128] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:00.366 02:03:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:00.366 02:03:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:22:00.366 02:03:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:00.625 02:03:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:22:00.884 NVMe0n1 00:22:00.884 02:03:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=96689 00:22:00.884 02:03:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:00.884 02:03:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:22:01.143 Running I/O for 10 seconds... 
00:22:02.079 02:03:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:22:02.341 7973.00 IOPS, 31.14 MiB/s [2024-11-19T02:03:12.956Z] [2024-11-19 02:03:12.758733 - 02:03:12.759806] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated abort notices, identical apart from cid and lba: WRITE commands lba:74592 through lba:74896 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands lba:73904 through lba:73984 (len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.343 [2024-11-19 02:03:12.759816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:73992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.343 [2024-11-19 02:03:12.759824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.343 [2024-11-19 02:03:12.759834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:74000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.343 [2024-11-19 02:03:12.759842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.343 [2024-11-19 02:03:12.759852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:74008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.343 [2024-11-19 02:03:12.759861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.343 [2024-11-19 02:03:12.759871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:74016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.343 [2024-11-19 02:03:12.759879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.343 [2024-11-19 02:03:12.759889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:74904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.343 [2024-11-19 02:03:12.759897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.343 [2024-11-19 02:03:12.759907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.343 [2024-11-19 02:03:12.759915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.343 [2024-11-19 02:03:12.759925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:74024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.343 [2024-11-19 02:03:12.759933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.343 [2024-11-19 02:03:12.759943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:74032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.343 [2024-11-19 02:03:12.759951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.343 [2024-11-19 02:03:12.759961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:74040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.343 [2024-11-19 02:03:12.759970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.343 [2024-11-19 02:03:12.759980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:74048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.343 [2024-11-19 02:03:12.759988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.343 
[2024-11-19 02:03:12.759998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:74056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.343 [2024-11-19 02:03:12.760006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.343 [2024-11-19 02:03:12.760016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:74064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.343 [2024-11-19 02:03:12.760024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.343 [2024-11-19 02:03:12.760034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:74072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.343 [2024-11-19 02:03:12.760042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.343 [2024-11-19 02:03:12.760052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:02.343 [2024-11-19 02:03:12.760060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.343 [2024-11-19 02:03:12.760070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:74080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.343 [2024-11-19 02:03:12.760079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.343 [2024-11-19 02:03:12.760089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:74088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.343 [2024-11-19 02:03:12.760098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.343 [2024-11-19 02:03:12.760108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:74096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.343 [2024-11-19 02:03:12.760116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.343 [2024-11-19 02:03:12.760126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.343 [2024-11-19 02:03:12.760134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.343 [2024-11-19 02:03:12.760144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:74112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.343 [2024-11-19 02:03:12.760152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.343 [2024-11-19 02:03:12.760163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.343 [2024-11-19 02:03:12.760171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.343 [2024-11-19 02:03:12.760181] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:74128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.343 [2024-11-19 02:03:12.760189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.344 [2024-11-19 02:03:12.760199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:74136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.344 [2024-11-19 02:03:12.760207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.344 [2024-11-19 02:03:12.760218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:74144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.344 [2024-11-19 02:03:12.760226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.344 [2024-11-19 02:03:12.760236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:74152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.344 [2024-11-19 02:03:12.760244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.344 [2024-11-19 02:03:12.760254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:74160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.344 [2024-11-19 02:03:12.760262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.344 [2024-11-19 02:03:12.760272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:74168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.344 [2024-11-19 02:03:12.760280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.344 [2024-11-19 02:03:12.760290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:74176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.344 [2024-11-19 02:03:12.760298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.344 [2024-11-19 02:03:12.760308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:74184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.344 [2024-11-19 02:03:12.760316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.344 [2024-11-19 02:03:12.760326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:74192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.344 [2024-11-19 02:03:12.760334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.344 [2024-11-19 02:03:12.760344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.344 [2024-11-19 02:03:12.760352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.344 [2024-11-19 02:03:12.760362] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.344 [2024-11-19 02:03:12.760371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.344 [2024-11-19 02:03:12.760382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.344 [2024-11-19 02:03:12.760391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.344 [2024-11-19 02:03:12.760401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.344 [2024-11-19 02:03:12.760410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.344 [2024-11-19 02:03:12.760420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:74232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.344 [2024-11-19 02:03:12.760428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.344 [2024-11-19 02:03:12.760438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:74240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.344 [2024-11-19 02:03:12.760446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.344 [2024-11-19 02:03:12.760457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:74248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.344 [2024-11-19 02:03:12.760465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.344 [2024-11-19 02:03:12.760475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:74256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.344 [2024-11-19 02:03:12.760483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.344 [2024-11-19 02:03:12.760493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:74264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.344 [2024-11-19 02:03:12.760527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.344 [2024-11-19 02:03:12.760539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:74272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.344 [2024-11-19 02:03:12.760547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.344 [2024-11-19 02:03:12.760558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:74280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.344 [2024-11-19 02:03:12.760566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.344 [2024-11-19 02:03:12.760577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 
lba:74288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.344 [2024-11-19 02:03:12.760586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.344 [2024-11-19 02:03:12.760596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.344 [2024-11-19 02:03:12.760605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.344 [2024-11-19 02:03:12.760615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:74304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.344 [2024-11-19 02:03:12.760623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.344 [2024-11-19 02:03:12.760634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:74312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.344 [2024-11-19 02:03:12.760642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.344 [2024-11-19 02:03:12.760653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:74320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.344 [2024-11-19 02:03:12.760661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.344 [2024-11-19 02:03:12.760672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:74328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.344 [2024-11-19 02:03:12.760680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.344 [2024-11-19 02:03:12.760691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:74336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.344 [2024-11-19 02:03:12.760700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.344 [2024-11-19 02:03:12.760716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:74344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.344 [2024-11-19 02:03:12.760725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.344 [2024-11-19 02:03:12.760736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:74352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.344 [2024-11-19 02:03:12.760744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.344 [2024-11-19 02:03:12.760755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:74360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.344 [2024-11-19 02:03:12.760764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.344 [2024-11-19 02:03:12.760774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:74368 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:02.344 [2024-11-19 02:03:12.760783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.344 [2024-11-19 02:03:12.760793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:74376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.344 [2024-11-19 02:03:12.760802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.345 [2024-11-19 02:03:12.760812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:74384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.345 [2024-11-19 02:03:12.760820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.345 [2024-11-19 02:03:12.760831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:74392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.345 [2024-11-19 02:03:12.760855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.345 [2024-11-19 02:03:12.760866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:74400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.345 [2024-11-19 02:03:12.760874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.345 [2024-11-19 02:03:12.760884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:74408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.345 [2024-11-19 02:03:12.760892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.345 [2024-11-19 02:03:12.760903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:74416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.345 [2024-11-19 02:03:12.760911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.345 [2024-11-19 02:03:12.760921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:74424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.345 [2024-11-19 02:03:12.760929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.345 [2024-11-19 02:03:12.760940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:74432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.345 [2024-11-19 02:03:12.760948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.345 [2024-11-19 02:03:12.760959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.345 [2024-11-19 02:03:12.760967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.345 [2024-11-19 02:03:12.760977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:74448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.345 [2024-11-19 
02:03:12.760986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.345 [2024-11-19 02:03:12.760996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:74456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.345 [2024-11-19 02:03:12.761004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.345 [2024-11-19 02:03:12.761014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:74464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.345 [2024-11-19 02:03:12.761023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.345 [2024-11-19 02:03:12.761035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:74472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.345 [2024-11-19 02:03:12.761044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.345 [2024-11-19 02:03:12.761054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:74480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.345 [2024-11-19 02:03:12.761063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.345 [2024-11-19 02:03:12.761090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:74488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.345 [2024-11-19 02:03:12.761099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.345 [2024-11-19 02:03:12.761109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.345 [2024-11-19 02:03:12.761117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.345 [2024-11-19 02:03:12.761128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:74504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.345 [2024-11-19 02:03:12.761137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.345 [2024-11-19 02:03:12.761147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:74512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.345 [2024-11-19 02:03:12.761156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.345 [2024-11-19 02:03:12.761166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:74520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.345 [2024-11-19 02:03:12.761194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.345 [2024-11-19 02:03:12.761205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:74528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.345 [2024-11-19 02:03:12.761214] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.345 [2024-11-19 02:03:12.761224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:74536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.345 [2024-11-19 02:03:12.761250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.345 [2024-11-19 02:03:12.761260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:74544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.345 [2024-11-19 02:03:12.761269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.345 [2024-11-19 02:03:12.761279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:74552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.345 [2024-11-19 02:03:12.761289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.345 [2024-11-19 02:03:12.761299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:74560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.345 [2024-11-19 02:03:12.761308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.345 [2024-11-19 02:03:12.761318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:74568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.345 [2024-11-19 02:03:12.761327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.345 [2024-11-19 02:03:12.761337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:74576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.345 [2024-11-19 02:03:12.761346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.345 [2024-11-19 02:03:12.761356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x54ee70 is same with the state(6) to be set 00:22:02.345 [2024-11-19 02:03:12.761367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:02.345 [2024-11-19 02:03:12.761375] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:02.345 [2024-11-19 02:03:12.761382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74584 len:8 PRP1 0x0 PRP2 0x0 00:22:02.345 [2024-11-19 02:03:12.761392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.345 [2024-11-19 02:03:12.761520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:02.345 [2024-11-19 02:03:12.761551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.345 [2024-11-19 02:03:12.761578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:02.345 [2024-11-19 02:03:12.761587] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.345 [2024-11-19 02:03:12.761596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:02.345 [2024-11-19 02:03:12.761605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.345 [2024-11-19 02:03:12.761614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:02.345 [2024-11-19 02:03:12.761634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.345 [2024-11-19 02:03:12.761644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d3e0 is same with the state(6) to be set 00:22:02.345 [2024-11-19 02:03:12.761864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:02.345 [2024-11-19 02:03:12.761896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x52d3e0 (9): Bad file descriptor 00:22:02.345 [2024-11-19 02:03:12.762034] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:02.346 [2024-11-19 02:03:12.762066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x52d3e0 with addr=10.0.0.3, port=4420 00:22:02.346 [2024-11-19 02:03:12.762077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d3e0 is same with the state(6) to be set 00:22:02.346 [2024-11-19 02:03:12.762096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x52d3e0 (9): Bad file descriptor 00:22:02.346 [2024-11-19 02:03:12.762113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:22:02.346 [2024-11-19 02:03:12.762122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:22:02.346 [2024-11-19 02:03:12.762132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:22:02.346 [2024-11-19 02:03:12.762143] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
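A note for readers working through dumps like the one condensed above: the repeated "(00/08)" status is NVMe status code type 0x0 (generic) with status code 0x08, Command Aborted due to SQ Deletion, which is why every command still queued on the deleted submission queue completes identically. A minimal, hypothetical Python sketch (not part of the test suite) that summarizes such a dump from a saved console log, assuming the exact print_command format shown in this log:

    import collections
    import re
    import sys

    # Matches SPDK's nvme_io_qpair_print_command output as it appears in this log.
    cmd_re = re.compile(
        r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
        r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
    )

    counts = collections.Counter()
    lbas = collections.defaultdict(list)
    for line in sys.stdin:
        m = cmd_re.search(line)
        if m:
            op, lba = m.group(1), int(m.group(5))
            counts[op] += 1
            lbas[op].append(lba)

    for op, n in sorted(counts.items()):
        print(f"{op}: {n} aborted commands, lba {min(lbas[op])}..{max(lbas[op])}")

Invoked as, for example, python3 summarize_aborts.py < console.log (the script name is illustrative).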
00:22:02.346 [2024-11-19 02:03:12.762153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:22:03.283 02:03:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:22:03.283 4619.00 IOPS, 18.04 MiB/s [2024-11-19T02:03:13.898Z]
00:22:03.283 [2024-11-19 02:03:13.762245] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:22:03.283 [2024-11-19 02:03:13.762318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x52d3e0 with addr=10.0.0.3, port=4420
00:22:03.283 [2024-11-19 02:03:13.762345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d3e0 is same with the state(6) to be set
00:22:03.283 [2024-11-19 02:03:13.762365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x52d3e0 (9): Bad file descriptor
00:22:03.283 [2024-11-19 02:03:13.762382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:22:03.283 [2024-11-19 02:03:13.762391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:22:03.283 [2024-11-19 02:03:13.762402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:22:03.283 [2024-11-19 02:03:13.762411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:22:03.283 [2024-11-19 02:03:13.762421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:22:03.283 02:03:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:22:03.542 [2024-11-19 02:03:14.030433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:22:03.542 02:03:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 96689
00:22:04.369 3079.33 IOPS, 12.03 MiB/s [2024-11-19T02:03:14.984Z]
00:22:04.369 [2024-11-19 02:03:14.773294] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
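One detail worth noting in the connect() failures above: errno = 111 is ECONNREFUSED on Linux, i.e. the target simply refuses new TCP connections while the listener is removed, and the reset at 02:03:14.773294 succeeds only after nvmf_subsystem_add_listener re-adds it. A quick check using only the Python standard library:

    import errno

    # errno 111 on Linux maps to ECONNREFUSED, matching the uring_sock_create errors above.
    print(errno.errorcode[111])  # prints: ECONNREFUSED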
00:22:06.243 2309.50 IOPS, 9.02 MiB/s [2024-11-19T02:03:17.796Z] 3644.80 IOPS, 14.24 MiB/s [2024-11-19T02:03:18.733Z] 4849.33 IOPS, 18.94 MiB/s [2024-11-19T02:03:19.670Z] 5729.14 IOPS, 22.38 MiB/s [2024-11-19T02:03:20.609Z] 6370.88 IOPS, 24.89 MiB/s [2024-11-19T02:03:21.987Z] 6860.44 IOPS, 26.80 MiB/s [2024-11-19T02:03:21.987Z] 7259.90 IOPS, 28.36 MiB/s
00:22:11.372 Latency(us)
00:22:11.372 [2024-11-19T02:03:21.987Z] Device Information : runtime(s)    IOPS   MiB/s  Fail/s  TO/s   Average      min        max
00:22:11.372 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:11.372 Verification LBA range: start 0x0 length 0x4000
00:22:11.372 NVMe0n1 :                        10.01    7264.69   28.38    0.00   0.00   17583.81   1266.04   3019898.88
00:22:11.372 [2024-11-19T02:03:21.987Z] ===================================================================================================================
00:22:11.372 [2024-11-19T02:03:21.987Z] Total :                           7264.69   28.38    0.00   0.00   17583.81   1266.04   3019898.88
00:22:11.372 {
00:22:11.372   "results": [
00:22:11.372     {
00:22:11.372       "job": "NVMe0n1",
00:22:11.372       "core_mask": "0x4",
00:22:11.372       "workload": "verify",
00:22:11.372       "status": "finished",
00:22:11.372       "verify_range": {
00:22:11.372         "start": 0,
00:22:11.372         "length": 16384
00:22:11.372       },
00:22:11.372       "queue_depth": 128,
00:22:11.372       "io_size": 4096,
00:22:11.372       "runtime": 10.00676,
00:22:11.372       "iops": 7264.689070188552,
00:22:11.372       "mibps": 28.377691680424032,
00:22:11.372       "io_failed": 0,
00:22:11.372       "io_timeout": 0,
00:22:11.372       "avg_latency_us": 17583.811099572817,
00:22:11.372       "min_latency_us": 1266.0363636363636,
00:22:11.372       "max_latency_us": 3019898.88
00:22:11.372     }
00:22:11.372   ],
00:22:11.372   "core_count": 1
00:22:11.372 }
00:22:11.372 02:03:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=96795
00:22:11.372 02:03:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:11.372 02:03:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:22:11.372 Running I/O for 10 seconds...
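The JSON block above is bdevperf's machine-readable copy of the human-readable table, and the two are consistent: MiB/s is just iops * io_size / 2^20. A small sanity check with the reported values (nothing here is part of the test itself):

    # Values copied from the "results" entry above.
    iops = 7264.689070188552
    io_size = 4096  # bytes per I/O

    mibps = iops * io_size / 2**20
    print(f"{mibps:.2f} MiB/s")  # prints: 28.38 MiB/s, matching the "mibps" field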
00:22:12.311 02:03:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:22:12.311 7957.00 IOPS, 31.08 MiB/s [2024-11-19T02:03:22.926Z]
00:22:12.311 [2024-11-19 02:03:22.894100 - 02:03:22.895610] nvme_qpair.c: 243/474: *NOTICE*: [condensed: repeated nvme_io_qpair_print_command/spdk_nvme_print_completion pairs for queued I/O on qid:1, WRITE lba:75352-75904 (len:8 each), every command completed as ABORTED - SQ DELETION (00/08)]
00:22:12.313 [2024-11-19 02:03:22.895622] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.313 [2024-11-19 02:03:22.895631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.313 [2024-11-19 02:03:22.895641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:74920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.313 [2024-11-19 02:03:22.895650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.313 [2024-11-19 02:03:22.895660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:74928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.313 [2024-11-19 02:03:22.895669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.313 [2024-11-19 02:03:22.895679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:74936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.313 [2024-11-19 02:03:22.895687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.313 [2024-11-19 02:03:22.895698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:74944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.313 [2024-11-19 02:03:22.895706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.313 [2024-11-19 02:03:22.895717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.313 [2024-11-19 02:03:22.895725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.313 [2024-11-19 02:03:22.895736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:74960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.313 [2024-11-19 02:03:22.895744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.313 [2024-11-19 02:03:22.895755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:74968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.313 [2024-11-19 02:03:22.895763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.313 [2024-11-19 02:03:22.895773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:74976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.313 [2024-11-19 02:03:22.895782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.313 [2024-11-19 02:03:22.895792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:74984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.313 [2024-11-19 02:03:22.895801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.313 [2024-11-19 02:03:22.895811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 
lba:74992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.895820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.895830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:75000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.895840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.895850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:75008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.895859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.895869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:75016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.895878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.895889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.895897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.895922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:75032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.895931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.895940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.314 [2024-11-19 02:03:22.895949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.895959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.314 [2024-11-19 02:03:22.895967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.895977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:75040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.895985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.895995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.896004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.896013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:75056 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.896022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.896032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.896040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.896051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.896059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.896069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:75080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.896077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.896087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.896096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.896106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.314 [2024-11-19 02:03:22.896114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.896124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:75096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.896132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.896148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:75104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.896161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.896172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:75112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.896180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.896190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.896199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.896209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:75128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 
02:03:22.896217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.896227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.896235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.896245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:75144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.896253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.896263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.896271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.896282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.896290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.896300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:75168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.896308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.896318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:75176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.896326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.896336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.896344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.896354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:75192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.896363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.896372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.896381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.896391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.896399] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.896409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.896417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.896427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.896435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.896447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.896457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.896468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.896476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.896486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.896495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.896505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.896524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.896536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.896561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.314 [2024-11-19 02:03:22.896571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.314 [2024-11-19 02:03:22.896580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.315 [2024-11-19 02:03:22.896590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.315 [2024-11-19 02:03:22.896598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.315 [2024-11-19 02:03:22.896609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:75288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.315 [2024-11-19 02:03:22.896617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.315 [2024-11-19 02:03:22.896628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.315 [2024-11-19 02:03:22.896636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.315 [2024-11-19 02:03:22.896647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.315 [2024-11-19 02:03:22.896655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.315 [2024-11-19 02:03:22.896665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.315 [2024-11-19 02:03:22.896674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.315 [2024-11-19 02:03:22.896685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.315 [2024-11-19 02:03:22.896693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.315 [2024-11-19 02:03:22.896703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.315 [2024-11-19 02:03:22.896712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.315 [2024-11-19 02:03:22.896722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:12.315 [2024-11-19 02:03:22.896730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.315 [2024-11-19 02:03:22.896740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x54ceb0 is same with the state(6) to be set 00:22:12.315 [2024-11-19 02:03:22.896752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:12.315 [2024-11-19 02:03:22.896759] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:12.315 [2024-11-19 02:03:22.896767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75344 len:8 PRP1 0x0 PRP2 0x0 00:22:12.315 [2024-11-19 02:03:22.896777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:12.315 [2024-11-19 02:03:22.897048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:12.315 [2024-11-19 02:03:22.897118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x52d3e0 (9): Bad file descriptor 00:22:12.315 [2024-11-19 02:03:22.897209] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:12.315 [2024-11-19 02:03:22.897228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x52d3e0 with addr=10.0.0.3, port=4420 00:22:12.315 [2024-11-19 
02:03:22.897238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d3e0 is same with the state(6) to be set 00:22:12.315 [2024-11-19 02:03:22.897253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x52d3e0 (9): Bad file descriptor 00:22:12.315 [2024-11-19 02:03:22.897268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:12.315 [2024-11-19 02:03:22.897278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:12.315 [2024-11-19 02:03:22.897287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:12.315 [2024-11-19 02:03:22.897297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:22:12.315 [2024-11-19 02:03:22.897307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:12.574 02:03:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:22:13.401 4682.50 IOPS, 18.29 MiB/s [2024-11-19T02:03:24.016Z] [2024-11-19 02:03:23.897403] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:13.401 [2024-11-19 02:03:23.897477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x52d3e0 with addr=10.0.0.3, port=4420 00:22:13.401 [2024-11-19 02:03:23.897490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d3e0 is same with the state(6) to be set 00:22:13.401 [2024-11-19 02:03:23.897525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x52d3e0 (9): Bad file descriptor 00:22:13.401 [2024-11-19 02:03:23.897555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:13.401 [2024-11-19 02:03:23.897565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:13.401 [2024-11-19 02:03:23.897574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:13.401 [2024-11-19 02:03:23.897584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:22:13.401 [2024-11-19 02:03:23.897594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:14.339 3121.67 IOPS, 12.19 MiB/s [2024-11-19T02:03:24.954Z] [2024-11-19 02:03:24.897683] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:14.339 [2024-11-19 02:03:24.897755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x52d3e0 with addr=10.0.0.3, port=4420 00:22:14.339 [2024-11-19 02:03:24.897768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d3e0 is same with the state(6) to be set 00:22:14.339 [2024-11-19 02:03:24.897788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x52d3e0 (9): Bad file descriptor 00:22:14.339 [2024-11-19 02:03:24.897805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:14.339 [2024-11-19 02:03:24.897814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:14.339 [2024-11-19 02:03:24.897824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:14.339 [2024-11-19 02:03:24.897834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:22:14.339 [2024-11-19 02:03:24.897844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:15.536 2341.25 IOPS, 9.15 MiB/s [2024-11-19T02:03:26.151Z] [2024-11-19 02:03:25.901495] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:15.536 [2024-11-19 02:03:25.901576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x52d3e0 with addr=10.0.0.3, port=4420 00:22:15.536 [2024-11-19 02:03:25.901590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d3e0 is same with the state(6) to be set 00:22:15.536 [2024-11-19 02:03:25.901864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x52d3e0 (9): Bad file descriptor 00:22:15.536 [2024-11-19 02:03:25.902165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:15.536 [2024-11-19 02:03:25.902188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:15.536 [2024-11-19 02:03:25.902199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:15.536 [2024-11-19 02:03:25.902210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
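The connect() failures above (errno = 111, i.e. ECONNREFUSED) are the intended fault for this phase of host/timeout.sh: an earlier step removed the target's TCP listener, so every reconnect the host driver attempts against 10.0.0.3:4420 is refused, and each roughly one-second reset cycle ends in "Resetting controller failed." until the listener is restored at timeout.sh@102 below. A minimal sketch of that fault-injection pattern, built only from the rpc.py calls this script traces elsewhere in the log (the exact surrounding shell is an assumption):

    # Remove the listener: in-flight I/O is aborted (SQ DELETION) and every
    # reconnect to 10.0.0.3:4420 now fails with ECONNREFUSED (errno 111).
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    sleep 3   # timeout.sh@101: let several reset/reconnect cycles fail
    # Restore the listener: the next reconnect attempt should succeed and I/O resumes.
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420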
00:22:15.536 [2024-11-19 02:03:25.902221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:15.536 02:03:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:15.797 [2024-11-19 02:03:26.211196] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:15.797 02:03:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 96795 00:22:16.367 1873.00 IOPS, 7.32 MiB/s [2024-11-19T02:03:26.982Z] [2024-11-19 02:03:26.926658] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:22:18.243 2971.33 IOPS, 11.61 MiB/s [2024-11-19T02:03:29.794Z] 4094.29 IOPS, 15.99 MiB/s [2024-11-19T02:03:30.730Z] 4937.00 IOPS, 19.29 MiB/s [2024-11-19T02:03:31.801Z] 5597.33 IOPS, 21.86 MiB/s [2024-11-19T02:03:31.802Z] 6122.00 IOPS, 23.91 MiB/s 00:22:21.187 Latency(us) 00:22:21.187 [2024-11-19T02:03:31.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.187 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:21.187 Verification LBA range: start 0x0 length 0x4000 00:22:21.187 NVMe0n1 : 10.01 6129.58 23.94 4137.62 0.00 12443.34 614.40 3019898.88 00:22:21.187 [2024-11-19T02:03:31.802Z] =================================================================================================================== 00:22:21.187 [2024-11-19T02:03:31.802Z] Total : 6129.58 23.94 4137.62 0.00 12443.34 0.00 3019898.88 00:22:21.187 { 00:22:21.187 "results": [ 00:22:21.187 { 00:22:21.187 "job": "NVMe0n1", 00:22:21.187 "core_mask": "0x4", 00:22:21.187 "workload": "verify", 00:22:21.187 "status": "finished", 00:22:21.187 "verify_range": { 00:22:21.187 "start": 0, 00:22:21.187 "length": 16384 00:22:21.187 }, 00:22:21.187 "queue_depth": 128, 00:22:21.187 "io_size": 4096, 00:22:21.187 "runtime": 10.007206, 00:22:21.187 "iops": 6129.583022474005, 00:22:21.187 "mibps": 23.94368368153908, 00:22:21.187 "io_failed": 41406, 00:22:21.187 "io_timeout": 0, 00:22:21.187 "avg_latency_us": 12443.34314643525, 00:22:21.187 "min_latency_us": 614.4, 00:22:21.187 "max_latency_us": 3019898.88 00:22:21.187 } 00:22:21.187 ], 00:22:21.187 "core_count": 1 00:22:21.187 } 00:22:21.187 02:03:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 96673 00:22:21.187 02:03:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 96673 ']' 00:22:21.187 02:03:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 96673 00:22:21.187 02:03:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:22:21.187 02:03:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:21.187 02:03:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96673 00:22:21.187 02:03:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:21.187 02:03:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:21.187 killing process with pid 96673 00:22:21.187 02:03:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96673' 00:22:21.187 Received shutdown signal, test time was about 10.000000 seconds 
00:22:21.187 00:22:21.187 Latency(us) 00:22:21.187 [2024-11-19T02:03:31.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.187 [2024-11-19T02:03:31.802Z] =================================================================================================================== 00:22:21.187 [2024-11-19T02:03:31.802Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:21.187 02:03:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 96673 00:22:21.187 02:03:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 96673 00:22:21.455 02:03:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:22:21.455 02:03:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=96905 00:22:21.455 02:03:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 96905 /var/tmp/bdevperf.sock 00:22:21.455 02:03:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 96905 ']' 00:22:21.455 02:03:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:21.455 02:03:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:21.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:21.455 02:03:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:21.455 02:03:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:21.455 02:03:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:21.455 [2024-11-19 02:03:31.952091] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:22:21.455 [2024-11-19 02:03:31.952205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96905 ] 00:22:21.713 [2024-11-19 02:03:32.098522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.713 [2024-11-19 02:03:32.117324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:21.713 [2024-11-19 02:03:32.144408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:21.713 02:03:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:21.713 02:03:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:22:21.713 02:03:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=96912 00:22:21.713 02:03:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96905 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:22:21.713 02:03:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:22:21.971 02:03:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:22.229 NVMe0n1 00:22:22.229 02:03:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=96954 00:22:22.229 02:03:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:22.229 02:03:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:22:22.488 Running I/O for 10 seconds... 
02:03:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:22:23.427 17272.00 IOPS, 67.47 MiB/s [2024-11-19T02:03:34.042Z] [2024-11-19 02:03:33.994217 - 02:03:33.995285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x159e860 is same with the state(6) to be set (the target repeats this line roughly a hundred times while the listener is torn down; the duplicates are elided here and the host-side notices that were interleaved into them are untangled below).
00:22:23.428 [2024-11-19 02:03:33.994406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.428 [2024-11-19 02:03:33.994433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.428 [2024-11-19 02:03:33.994444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.428 [2024-11-19 02:03:33.994453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.428 [2024-11-19 02:03:33.994462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.428 [2024-11-19 02:03:33.994471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.428 [2024-11-19 02:03:33.994480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.428 [2024-11-19 02:03:33.994488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.428 [2024-11-19 02:03:33.994497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1380180 is same with the state(6) to be set
00:22:23.429 [2024-11-19 02:03:33.995338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.429 [2024-11-19 02:03:33.995355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.429 [2024-11-19 02:03:33.995373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.429 [2024-11-19 02:03:33.995382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.429 [2024-11-19 02:03:33.995393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:119088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.429 [2024-11-19 02:03:33.995401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.429 [2024-11-19 02:03:33.995411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.429 [2024-11-19 02:03:33.995420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.429 [2024-11-19 02:03:33.995430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.429 [2024-11-19 02:03:33.995439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.429 [2024-11-19 02:03:33.995449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:108032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.429 [2024-11-19 02:03:33.995457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.429 [2024-11-19 02:03:33.995467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:101400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.429 [2024-11-19 02:03:33.995475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.429 [2024-11-19 02:03:33.995485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:42736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-19 02:03:33.995493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.429 [2024-11-19 02:03:33.995515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:70776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.429 [2024-11-19 02:03:33.995524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.429 [2024-11-19 02:03:33.995534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:115448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.429 [2024-11-19 02:03:33.995542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.429 [2024-11-19 02:03:33.995552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.429 [2024-11-19 02:03:33.995561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.429 [2024-11-19 02:03:33.995571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:85384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.429 [2024-11-19 02:03:33.995579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.429 [2024-11-19 02:03:33.995591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.429 [2024-11-19 02:03:33.995599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.429 [2024-11-19 02:03:33.995610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:80040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.429 [2024-11-19 02:03:33.995618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.429 [2024-11-19 02:03:33.995628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.429 [2024-11-19 02:03:33.995637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.429 [2024-11-19 02:03:33.995647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.429 [2024-11-19 02:03:33.995655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.429 [2024-11-19 02:03:33.995665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:54512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.429 [2024-11-19 02:03:33.995674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.429 [2024-11-19 02:03:33.995685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:87288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.429 [2024-11-19 02:03:33.995693] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.995703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:130912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.995712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.995722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:72632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.995730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.995740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.995748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.995758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.995766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.995776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:62088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.995785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.995795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:120352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.995803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.995813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.995821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.995831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.995840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.995849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:34416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.995858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.995867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.995876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.995892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.995900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.995910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:72440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.995919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.995929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.995937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.995947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:76768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.995956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.995966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:114216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.995974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.995984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:51872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.995992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.996002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:68344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.996010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.996020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.996028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.996038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:115608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.996046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.996056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.996065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.996075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.996083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.996093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:90576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.996101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.996111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:113464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.996120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.996130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:114832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.996138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.996148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:117016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.996156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.996166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.996175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.996188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:80272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.996196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.996206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:52768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.996214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.996225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.996233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.996243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.996251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:23.430 [2024-11-19 02:03:33.996261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:51376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.996269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.996279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:67432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.996287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.996297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.996305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.996315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.996324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.996334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:71696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.996342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.996352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.996360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.996370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:118120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.996379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.996389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:58096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.996397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.996407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:126136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.430 [2024-11-19 02:03:33.996415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.430 [2024-11-19 02:03:33.996425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.996434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.996444] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.996452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.996462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.996470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.996483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:110896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.996492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.996526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.996536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.996562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.996571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.996581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:83032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.996590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.996601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:50552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.996610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.996621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:128072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.996629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.996640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.996649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.996660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:52208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.996669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.996680] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:102168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.996688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.996699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.996708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.996719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.996727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.996738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:45704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.996747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.996758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:26888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.996766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.996777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:44168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.996786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.996796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:48160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.996805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.996816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.996825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.996838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.996847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.996858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:76528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.996867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.996877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:80 nsid:1 lba:123056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.996886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.996897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:119944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.996906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.996930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:66952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.996953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.996964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:113200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.996972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.996982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.996990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.997000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.997009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.997019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.997027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.997037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:104896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.997045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.997055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.997063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.997073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:43408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.997082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.997092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:90392 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.997100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.997110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:86632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.997118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.997128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:36480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.997137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.997147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:124280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.997155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.997169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:37512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.997177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.997187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.997196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.997206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:114280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.997214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.997224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:26800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.997232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.431 [2024-11-19 02:03:33.997242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:113120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.431 [2024-11-19 02:03:33.997250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.432 [2024-11-19 02:03:33.997260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:123456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.432 [2024-11-19 02:03:33.997268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.432 [2024-11-19 02:03:33.997279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:62296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:23.432 [2024-11-19 02:03:33.997288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.432 [2024-11-19 02:03:33.997298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:80344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.432 [2024-11-19 02:03:33.997306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.432 [2024-11-19 02:03:33.997316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:67760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.432 [2024-11-19 02:03:33.997325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.432 [2024-11-19 02:03:33.997335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:60080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.432 [2024-11-19 02:03:33.997343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.432 [2024-11-19 02:03:33.997353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:116616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.432 [2024-11-19 02:03:33.997361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.432 [2024-11-19 02:03:33.997372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.432 [2024-11-19 02:03:33.997380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.432 [2024-11-19 02:03:33.997390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:58864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.432 [2024-11-19 02:03:33.997398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.432 [2024-11-19 02:03:33.997408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.432 [2024-11-19 02:03:33.997416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.432 [2024-11-19 02:03:33.997426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:119488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.432 [2024-11-19 02:03:33.997435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.432 [2024-11-19 02:03:33.997445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:55296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.432 [2024-11-19 02:03:33.997453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.432 [2024-11-19 02:03:33.997466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:89656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.432 [2024-11-19 
02:03:33.997475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.432 [2024-11-19 02:03:33.997485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.432 [2024-11-19 02:03:33.997494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.432 [2024-11-19 02:03:33.997504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:105040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.432 [2024-11-19 02:03:33.997512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.432 [2024-11-19 02:03:33.997523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:88528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.432 [2024-11-19 02:03:33.997531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.432 [2024-11-19 02:03:33.997541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.432 [2024-11-19 02:03:33.997550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.432 [2024-11-19 02:03:33.997569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.432 [2024-11-19 02:03:33.997578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.432 [2024-11-19 02:03:33.997588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:57576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.432 [2024-11-19 02:03:33.997596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.432 [2024-11-19 02:03:33.997606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:95000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.432 [2024-11-19 02:03:33.997615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.432 [2024-11-19 02:03:33.997625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.432 [2024-11-19 02:03:33.997633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.432 [2024-11-19 02:03:33.997643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:120056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.432 [2024-11-19 02:03:33.997651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.432 [2024-11-19 02:03:33.997661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:71400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.432 [2024-11-19 02:03:33.997669] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.432 [2024-11-19 02:03:33.997679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:102920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.432 [2024-11-19 02:03:33.997687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.432 [2024-11-19 02:03:33.997697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:43152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.432 [2024-11-19 02:03:33.997705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.432 [2024-11-19 02:03:33.997715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:27360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.432 [2024-11-19 02:03:33.997724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.432 [2024-11-19 02:03:33.997734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:100848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.432 [2024-11-19 02:03:33.997742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.432 [2024-11-19 02:03:33.997752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.432 [2024-11-19 02:03:33.997760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.432 [2024-11-19 02:03:33.997774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:100328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.432 [2024-11-19 02:03:33.997783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.432 [2024-11-19 02:03:33.997793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:50536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.432 [2024-11-19 02:03:33.997802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.432 [2024-11-19 02:03:33.997812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.432 [2024-11-19 02:03:33.997820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.432 [2024-11-19 02:03:33.997829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1d70 is same with the state(6) to be set 00:22:23.432 [2024-11-19 02:03:33.997840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:23.432 [2024-11-19 02:03:33.997847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:23.432 [2024-11-19 02:03:33.997855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32992 len:8 PRP1 0x0 PRP2 0x0 00:22:23.432 [2024-11-19 02:03:33.997863] 
00:22:23.432 [2024-11-19 02:03:33.997863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.432 [2024-11-19 02:03:33.998207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:22:23.433 [2024-11-19 02:03:33.998258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1380180 (9): Bad file descriptor
00:22:23.433 [2024-11-19 02:03:33.998390] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.433 [2024-11-19 02:03:33.998421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1380180 with addr=10.0.0.3, port=4420
00:22:23.433 [2024-11-19 02:03:33.998433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1380180 is same with the state(6) to be set
00:22:23.433 [2024-11-19 02:03:33.998451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1380180 (9): Bad file descriptor
00:22:23.433 [2024-11-19 02:03:33.998466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:22:23.433 [2024-11-19 02:03:33.998475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:22:23.433 [2024-11-19 02:03:33.998485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:22:23.433 [2024-11-19 02:03:33.998495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:22:23.433 [2024-11-19 02:03:33.998540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:22:23.433 02:03:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 96954
00:22:25.306 9653.00 IOPS, 37.71 MiB/s
[2024-11-19T02:03:36.181Z] 6435.33 IOPS, 25.14 MiB/s
[2024-11-19T02:03:36.181Z] [2024-11-19 02:03:36.016664] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:22:25.566 [2024-11-19 02:03:36.016744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1380180 with addr=10.0.0.3, port=4420
00:22:25.566 [2024-11-19 02:03:36.016761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1380180 is same with the state(6) to be set
00:22:25.566 [2024-11-19 02:03:36.016784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1380180 (9): Bad file descriptor
00:22:25.566 [2024-11-19 02:03:36.016802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:22:25.566 [2024-11-19 02:03:36.016812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:22:25.566 [2024-11-19 02:03:36.016822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:22:25.566 [2024-11-19 02:03:36.016833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:22:25.566 [2024-11-19 02:03:36.016843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:22:27.440 4826.50 IOPS, 18.85 MiB/s
[2024-11-19T02:03:38.055Z] 3861.20 IOPS, 15.08 MiB/s
[2024-11-19T02:03:38.055Z] [2024-11-19 02:03:38.017013] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:22:27.440 [2024-11-19 02:03:38.017092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1380180 with addr=10.0.0.3, port=4420
00:22:27.440 [2024-11-19 02:03:38.017108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1380180 is same with the state(6) to be set
00:22:27.440 [2024-11-19 02:03:38.017130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1380180 (9): Bad file descriptor
00:22:27.440 [2024-11-19 02:03:38.017168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:22:27.440 [2024-11-19 02:03:38.017184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:22:27.440 [2024-11-19 02:03:38.017195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:22:27.440 [2024-11-19 02:03:38.017207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:22:27.440 [2024-11-19 02:03:38.017218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:22:29.316 3217.67 IOPS, 12.57 MiB/s
[2024-11-19T02:03:40.190Z] 2758.00 IOPS, 10.77 MiB/s
[2024-11-19T02:03:40.190Z] [2024-11-19 02:03:40.017296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:22:29.575 [2024-11-19 02:03:40.017355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:22:29.575 [2024-11-19 02:03:40.017367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:22:29.575 [2024-11-19 02:03:40.017378] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state
00:22:29.575 [2024-11-19 02:03:40.017390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
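For context on the cadence above: the initiator retries the connection roughly every two seconds (02:03:34, :36, :38) and gives up at 02:03:40, which is the behavior of bdev_nvme's reconnect-delay/ctrlr-loss-timeout machinery. A sketch of how a controller is attached with those knobs via SPDK's rpc.py follows; the bdev name, subsystem NQN, address and port are taken from the log, while the option values and exact flag spellings are assumptions based on recent SPDK releases, not quoted from this test.

    # Hypothetical attach call: retry the connection every 2 s, give up
    # (delete the controller) after 8 s without a successful connection.
    scripts/rpc.py bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --reconnect-delay-sec 2 \
        --ctrlr-loss-timeout-sec 8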
00:22:30.513 2413.25 IOPS, 9.43 MiB/s
00:22:30.513 Latency(us)
00:22:30.513 [2024-11-19T02:03:41.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:30.513 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:22:30.513 NVMe0n1 : 8.13 2374.62 9.28 15.74 0.00 53469.25 6940.86 7015926.69
00:22:30.513 [2024-11-19T02:03:41.128Z] ===================================================================================================================
00:22:30.513 [2024-11-19T02:03:41.128Z] Total : 2374.62 9.28 15.74 0.00 53469.25 6940.86 7015926.69
00:22:30.513 {
00:22:30.513   "results": [
00:22:30.513     {
00:22:30.513       "job": "NVMe0n1",
00:22:30.513       "core_mask": "0x4",
00:22:30.513       "workload": "randread",
00:22:30.513       "status": "finished",
00:22:30.513       "queue_depth": 128,
00:22:30.513       "io_size": 4096,
00:22:30.513       "runtime": 8.130153,
00:22:30.513       "iops": 2374.617058252163,
00:22:30.513       "mibps": 9.275847883797512,
00:22:30.513       "io_failed": 128,
00:22:30.513       "io_timeout": 0,
00:22:30.513       "avg_latency_us": 53469.24998849251,
00:22:30.513       "min_latency_us": 6940.858181818182,
00:22:30.513       "max_latency_us": 7015926.69090909
00:22:30.513     }
00:22:30.513   ],
00:22:30.513   "core_count": 1
00:22:30.513 }
00:22:30.513 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:22:30.513 Attaching 5 probes...
00:22:30.513 1287.744101: reset bdev controller NVMe0
00:22:30.513 1287.840672: reconnect bdev controller NVMe0
00:22:30.513 3306.092997: reconnect delay bdev controller NVMe0
00:22:30.513 3306.126262: reconnect bdev controller NVMe0
00:22:30.513 5306.416900: reconnect delay bdev controller NVMe0
00:22:30.513 5306.447399: reconnect bdev controller NVMe0
00:22:30.513 7306.804180: reconnect delay bdev controller NVMe0
00:22:30.513 7306.837555: reconnect bdev controller NVMe0
00:22:30.514 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0'
00:22:30.514 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 ))
00:22:30.514 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 96912
00:22:30.514 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:22:30.514 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 96905
00:22:30.514 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 96905 ']'
00:22:30.514 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 96905
00:22:30.514 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
00:22:30.514 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:30.514 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96905
00:22:30.514 killing process with pid 96905
00:22:30.514 Received shutdown signal, test time was about 8.201770 seconds
00:22:30.514
00:22:30.514 Latency(us)
00:22:30.514 [2024-11-19T02:03:41.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:30.514 [2024-11-19T02:03:41.129Z] ===================================================================================================================
00:22:30.514 [2024-11-19T02:03:41.129Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
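The pass/fail gate traced above counts how many delayed reconnects the tracer recorded in trace.txt and fails the test when there are two or fewer; with three 'reconnect delay' probes the traced arithmetic (( 3 <= 2 )) is false, so the test passes. A sketch of that check, reconstructed from the xtrace rather than quoted from host/timeout.sh (variable names are illustrative):

    # Count delayed reconnects recorded in the trace and require at least 3.
    reconnect_delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace_file")
    if (( reconnect_delays <= 2 )); then
        echo "expected >= 3 delayed reconnects, got $reconnect_delays" >&2
        exit 1
    fi
    kill "$trace_pid"    # stop the tracer (pid 96912 in this run)
    rm -f "$trace_file"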
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:30.514 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:30.514 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96905' 00:22:30.514 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 96905 00:22:30.514 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 96905 00:22:30.773 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:31.033 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:22:31.033 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:22:31.033 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:31.033 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:22:31.033 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:31.033 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:22:31.033 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:31.033 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:31.033 rmmod nvme_tcp 00:22:31.033 rmmod nvme_fabrics 00:22:31.033 rmmod nvme_keyring 00:22:31.033 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:31.033 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:22:31.033 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:22:31.033 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 96485 ']' 00:22:31.033 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 96485 00:22:31.033 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 96485 ']' 00:22:31.033 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 96485 00:22:31.033 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:22:31.033 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:31.033 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96485 00:22:31.033 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:31.033 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:31.033 killing process with pid 96485 00:22:31.033 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96485' 00:22:31.033 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 96485 00:22:31.033 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 96485 00:22:31.293 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:31.293 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:31.293 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:31.293 02:03:41 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:22:31.293 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:22:31.293 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:22:31.293 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:31.293 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:31.293 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:31.293 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:31.293 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:31.293 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:31.293 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:31.293 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:31.293 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:31.293 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:31.293 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:31.293 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:31.293 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:31.293 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:31.293 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:31.553 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:31.553 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:31.553 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.553 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:31.553 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.553 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:22:31.553 00:22:31.553 real 0m44.990s 00:22:31.553 user 2m12.015s 00:22:31.553 sys 0m5.245s 00:22:31.553 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:31.553 02:03:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:31.553 ************************************ 00:22:31.553 END TEST nvmf_timeout 00:22:31.553 ************************************ 00:22:31.553 02:03:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:22:31.553 02:03:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:22:31.553 00:22:31.553 real 5m41.461s 00:22:31.553 user 16m0.217s 00:22:31.553 sys 1m15.190s 00:22:31.553 02:03:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:31.553 02:03:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 
00:22:31.553 ************************************ 00:22:31.553 END TEST nvmf_host 00:22:31.553 ************************************ 00:22:31.553 02:03:42 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:22:31.553 02:03:42 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:22:31.553 00:22:31.553 real 15m1.807s 00:22:31.553 user 39m26.727s 00:22:31.553 sys 4m3.937s 00:22:31.553 02:03:42 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:31.553 ************************************ 00:22:31.553 02:03:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:31.553 END TEST nvmf_tcp 00:22:31.553 ************************************ 00:22:31.553 02:03:42 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:22:31.553 02:03:42 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:31.553 02:03:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:31.553 02:03:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:31.553 02:03:42 -- common/autotest_common.sh@10 -- # set +x 00:22:31.553 ************************************ 00:22:31.553 START TEST nvmf_dif 00:22:31.553 ************************************ 00:22:31.553 02:03:42 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:31.813 * Looking for test storage... 00:22:31.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:31.813 02:03:42 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:31.813 02:03:42 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:31.813 02:03:42 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:22:31.813 02:03:42 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:31.813 02:03:42 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:31.813 02:03:42 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:31.813 02:03:42 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:31.813 02:03:42 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:22:31.813 02:03:42 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:22:31.813 02:03:42 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:22:31.813 02:03:42 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:22:31.813 02:03:42 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:22:31.813 02:03:42 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:22:31.813 02:03:42 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:22:31.813 02:03:42 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:31.813 02:03:42 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:22:31.813 02:03:42 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:22:31.813 02:03:42 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:31.813 02:03:42 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:31.813 02:03:42 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:22:31.813 02:03:42 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:22:31.813 02:03:42 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:31.813 02:03:42 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:22:31.813 02:03:42 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:22:31.813 02:03:42 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:22:31.813 02:03:42 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:22:31.813 02:03:42 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:31.813 02:03:42 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:22:31.813 02:03:42 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:22:31.813 02:03:42 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:31.813 02:03:42 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:31.813 02:03:42 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:22:31.813 02:03:42 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:31.814 02:03:42 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:31.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.814 --rc genhtml_branch_coverage=1 00:22:31.814 --rc genhtml_function_coverage=1 00:22:31.814 --rc genhtml_legend=1 00:22:31.814 --rc geninfo_all_blocks=1 00:22:31.814 --rc geninfo_unexecuted_blocks=1 00:22:31.814 00:22:31.814 ' 00:22:31.814 02:03:42 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:31.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.814 --rc genhtml_branch_coverage=1 00:22:31.814 --rc genhtml_function_coverage=1 00:22:31.814 --rc genhtml_legend=1 00:22:31.814 --rc geninfo_all_blocks=1 00:22:31.814 --rc geninfo_unexecuted_blocks=1 00:22:31.814 00:22:31.814 ' 00:22:31.814 02:03:42 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:31.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.814 --rc genhtml_branch_coverage=1 00:22:31.814 --rc genhtml_function_coverage=1 00:22:31.814 --rc genhtml_legend=1 00:22:31.814 --rc geninfo_all_blocks=1 00:22:31.814 --rc geninfo_unexecuted_blocks=1 00:22:31.814 00:22:31.814 ' 00:22:31.814 02:03:42 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:31.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.814 --rc genhtml_branch_coverage=1 00:22:31.814 --rc genhtml_function_coverage=1 00:22:31.814 --rc genhtml_legend=1 00:22:31.814 --rc geninfo_all_blocks=1 00:22:31.814 --rc geninfo_unexecuted_blocks=1 00:22:31.814 00:22:31.814 ' 00:22:31.814 02:03:42 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:31.814 02:03:42 nvmf_dif -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:31.814 02:03:42 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:22:31.814 02:03:42 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:31.814 02:03:42 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:31.814 02:03:42 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:31.814 02:03:42 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.814 02:03:42 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.814 02:03:42 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.814 02:03:42 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:22:31.814 02:03:42 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:31.814 02:03:42 nvmf_dif -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:31.814 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:31.814 02:03:42 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:22:31.814 02:03:42 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:22:31.814 02:03:42 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:22:31.814 02:03:42 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:22:31.814 02:03:42 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.814 02:03:42 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:31.814 02:03:42 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:31.814 Cannot find device 
"nvmf_init_br" 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@162 -- # true 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:31.814 Cannot find device "nvmf_init_br2" 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@163 -- # true 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:31.814 Cannot find device "nvmf_tgt_br" 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@164 -- # true 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:31.814 Cannot find device "nvmf_tgt_br2" 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@165 -- # true 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:31.814 Cannot find device "nvmf_init_br" 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@166 -- # true 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:31.814 Cannot find device "nvmf_init_br2" 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@167 -- # true 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:31.814 Cannot find device "nvmf_tgt_br" 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@168 -- # true 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:31.814 Cannot find device "nvmf_tgt_br2" 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@169 -- # true 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:31.814 Cannot find device "nvmf_br" 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@170 -- # true 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:31.814 Cannot find device "nvmf_init_if" 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@171 -- # true 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:31.814 Cannot find device "nvmf_init_if2" 00:22:31.814 02:03:42 nvmf_dif -- nvmf/common.sh@172 -- # true 00:22:31.815 02:03:42 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:31.815 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:31.815 02:03:42 nvmf_dif -- nvmf/common.sh@173 -- # true 00:22:31.815 02:03:42 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:32.073 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:32.073 02:03:42 nvmf_dif -- nvmf/common.sh@174 -- # true 00:22:32.073 02:03:42 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:32.073 02:03:42 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:32.073 02:03:42 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:32.073 02:03:42 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:32.073 02:03:42 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:32.073 02:03:42 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:32.073 02:03:42 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:32.073 02:03:42 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:32.073 02:03:42 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev 
nvmf_init_if2 00:22:32.073 02:03:42 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:32.073 02:03:42 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:32.073 02:03:42 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:32.073 02:03:42 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:32.073 02:03:42 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:32.073 02:03:42 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:32.073 02:03:42 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:32.073 02:03:42 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:32.073 02:03:42 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:32.073 02:03:42 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:32.073 02:03:42 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:32.074 02:03:42 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:32.074 02:03:42 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:32.074 02:03:42 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:32.074 02:03:42 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:32.074 02:03:42 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:32.074 02:03:42 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:32.074 02:03:42 nvmf_dif -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:32.074 02:03:42 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:32.074 02:03:42 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:32.074 02:03:42 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:32.074 02:03:42 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:32.074 02:03:42 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:32.333 02:03:42 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:32.333 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:32.333 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:22:32.333 00:22:32.333 --- 10.0.0.3 ping statistics --- 00:22:32.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.333 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:22:32.333 02:03:42 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:32.333 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:22:32.333 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:22:32.333 00:22:32.333 --- 10.0.0.4 ping statistics --- 00:22:32.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.333 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:22:32.333 02:03:42 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:32.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:32.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:22:32.333 00:22:32.333 --- 10.0.0.1 ping statistics --- 00:22:32.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.333 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:22:32.333 02:03:42 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:32.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:32.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:22:32.333 00:22:32.333 --- 10.0.0.2 ping statistics --- 00:22:32.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.333 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:22:32.333 02:03:42 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:32.333 02:03:42 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:22:32.333 02:03:42 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:22:32.333 02:03:42 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:32.592 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:32.592 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:32.593 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:32.593 02:03:43 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.593 02:03:43 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:32.593 02:03:43 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:32.593 02:03:43 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:32.593 02:03:43 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:32.593 02:03:43 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:32.593 02:03:43 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:22:32.593 02:03:43 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:22:32.593 02:03:43 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:32.593 02:03:43 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:32.593 02:03:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:32.593 02:03:43 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=97443 00:22:32.593 02:03:43 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 97443 00:22:32.593 02:03:43 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 97443 ']' 00:22:32.593 02:03:43 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.593 02:03:43 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:32.593 02:03:43 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:32.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.593 02:03:43 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
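For readability, the nvmf_veth_init sequence traced above reduces to the following bridged-veth topology between the host-side initiator and the nvmf_tgt_ns_spdk namespace. This condenses commands taken verbatim from the trace; the second nvmf_init_if2/nvmf_tgt_if2 pair, the in-namespace lo bring-up, and the iptables ACCEPT rules are elided for brevity:

# One initiator/target veth pair, joined by the nvmf_br bridge
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.3   # host -> target namespace, as verified above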
00:22:32.593 02:03:43 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:32.593 02:03:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:32.593 [2024-11-19 02:03:43.193979] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:22:32.593 [2024-11-19 02:03:43.194073] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.852 [2024-11-19 02:03:43.345840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.852 [2024-11-19 02:03:43.370215] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.852 [2024-11-19 02:03:43.370275] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.852 [2024-11-19 02:03:43.370289] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.852 [2024-11-19 02:03:43.370300] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.852 [2024-11-19 02:03:43.370309] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:32.852 [2024-11-19 02:03:43.370685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.852 [2024-11-19 02:03:43.407102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:32.852 02:03:43 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:32.852 02:03:43 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:22:32.852 02:03:43 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:32.852 02:03:43 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:32.852 02:03:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:33.112 02:03:43 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.112 02:03:43 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:22:33.112 02:03:43 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:22:33.112 02:03:43 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.112 02:03:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:33.112 [2024-11-19 02:03:43.502921] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.112 02:03:43 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.112 02:03:43 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:22:33.112 02:03:43 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:33.112 02:03:43 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:33.112 02:03:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:33.112 ************************************ 00:22:33.112 START TEST fio_dif_1_default 00:22:33.112 ************************************ 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:22:33.112 02:03:43 
nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:33.112 bdev_null0 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:33.112 [2024-11-19 02:03:43.547080] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:33.112 { 00:22:33.112 "params": { 00:22:33.112 "name": "Nvme$subsystem", 00:22:33.112 "trtype": "$TEST_TRANSPORT", 00:22:33.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:33.112 "adrfam": "ipv4", 00:22:33.112 "trsvcid": "$NVMF_PORT", 00:22:33.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:33.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:33.112 "hdgst": ${hdgst:-false}, 00:22:33.112 "ddgst": ${ddgst:-false} 00:22:33.112 }, 00:22:33.112 "method": "bdev_nvme_attach_controller" 00:22:33.112 } 00:22:33.112 EOF 00:22:33.112 )") 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:33.112 "params": { 00:22:33.112 "name": "Nvme0", 00:22:33.112 "trtype": "tcp", 00:22:33.112 "traddr": "10.0.0.3", 00:22:33.112 "adrfam": "ipv4", 00:22:33.112 "trsvcid": "4420", 00:22:33.112 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:33.112 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:33.112 "hdgst": false, 00:22:33.112 "ddgst": false 00:22:33.112 }, 00:22:33.112 "method": "bdev_nvme_attach_controller" 00:22:33.112 }' 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:33.112 02:03:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:33.371 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:33.371 fio-3.35 00:22:33.371 Starting 1 thread 00:22:45.583 00:22:45.583 filename0: (groupid=0, jobs=1): err= 0: pid=97497: Tue Nov 19 02:03:54 2024 00:22:45.583 read: IOPS=10.0k, BW=39.2MiB/s (41.1MB/s)(392MiB/10001msec) 00:22:45.583 slat (usec): min=5, max=483, avg= 7.86, stdev= 3.88 00:22:45.583 clat (usec): min=312, max=2902, avg=374.45, stdev=43.69 00:22:45.583 lat (usec): min=318, max=2931, avg=382.31, stdev=44.61 00:22:45.583 clat percentiles (usec): 00:22:45.583 | 1.00th=[ 318], 5.00th=[ 322], 10.00th=[ 330], 20.00th=[ 343], 00:22:45.583 | 30.00th=[ 351], 40.00th=[ 363], 50.00th=[ 371], 60.00th=[ 379], 00:22:45.583 | 70.00th=[ 388], 80.00th=[ 400], 90.00th=[ 424], 95.00th=[ 445], 00:22:45.583 | 99.00th=[ 502], 99.50th=[ 523], 99.90th=[ 578], 99.95th=[ 685], 00:22:45.583 | 99.99th=[ 963] 00:22:45.583 bw ( KiB/s): min=38496, max=41536, per=100.00%, avg=40225.68, stdev=764.40, samples=19 00:22:45.583 iops : min= 9624, max=10384, avg=10056.42, stdev=191.10, samples=19 00:22:45.583 lat (usec) : 500=99.00%, 750=0.97%, 1000=0.03% 00:22:45.583 lat (msec) : 2=0.01%, 4=0.01% 00:22:45.583 cpu : usr=84.77%, sys=13.03%, ctx=139, majf=0, minf=4 00:22:45.583 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:45.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:45.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:45.583 issued rwts: total=100420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:45.583 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:45.583 00:22:45.583 Run status group 0 (all jobs): 
00:22:45.583 READ: bw=39.2MiB/s (41.1MB/s), 39.2MiB/s-39.2MiB/s (41.1MB/s-41.1MB/s), io=392MiB (411MB), run=10001-10001msec 00:22:45.583 02:03:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:22:45.583 02:03:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:22:45.583 02:03:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:22:45.583 02:03:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:45.583 02:03:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:22:45.583 02:03:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:45.583 02:03:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.583 02:03:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:45.583 02:03:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.584 00:22:45.584 real 0m10.876s 00:22:45.584 user 0m9.052s 00:22:45.584 sys 0m1.529s 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:45.584 ************************************ 00:22:45.584 END TEST fio_dif_1_default 00:22:45.584 ************************************ 00:22:45.584 02:03:54 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:22:45.584 02:03:54 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:45.584 02:03:54 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:45.584 02:03:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:45.584 ************************************ 00:22:45.584 START TEST fio_dif_1_multi_subsystems 00:22:45.584 ************************************ 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:45.584 bdev_null0 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:45.584 [2024-11-19 02:03:54.472559] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:45.584 bdev_null1 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp 
-a 10.0.0.3 -s 4420 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.584 { 00:22:45.584 "params": { 00:22:45.584 "name": "Nvme$subsystem", 00:22:45.584 "trtype": "$TEST_TRANSPORT", 00:22:45.584 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.584 "adrfam": "ipv4", 00:22:45.584 "trsvcid": "$NVMF_PORT", 00:22:45.584 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.584 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.584 "hdgst": ${hdgst:-false}, 00:22:45.584 "ddgst": ${ddgst:-false} 00:22:45.584 }, 00:22:45.584 "method": "bdev_nvme_attach_controller" 00:22:45.584 } 00:22:45.584 EOF 00:22:45.584 )") 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:22:45.584 02:03:54 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.584 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.584 { 00:22:45.584 "params": { 00:22:45.585 "name": "Nvme$subsystem", 00:22:45.585 "trtype": "$TEST_TRANSPORT", 00:22:45.585 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.585 "adrfam": "ipv4", 00:22:45.585 "trsvcid": "$NVMF_PORT", 00:22:45.585 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.585 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.585 "hdgst": ${hdgst:-false}, 00:22:45.585 "ddgst": ${ddgst:-false} 00:22:45.585 }, 00:22:45.585 "method": "bdev_nvme_attach_controller" 00:22:45.585 } 00:22:45.585 EOF 00:22:45.585 )") 00:22:45.585 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:22:45.585 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:22:45.585 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:22:45.585 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:22:45.585 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:22:45.585 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:45.585 "params": { 00:22:45.585 "name": "Nvme0", 00:22:45.585 "trtype": "tcp", 00:22:45.585 "traddr": "10.0.0.3", 00:22:45.585 "adrfam": "ipv4", 00:22:45.585 "trsvcid": "4420", 00:22:45.585 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:45.585 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:45.585 "hdgst": false, 00:22:45.585 "ddgst": false 00:22:45.585 }, 00:22:45.585 "method": "bdev_nvme_attach_controller" 00:22:45.585 },{ 00:22:45.585 "params": { 00:22:45.585 "name": "Nvme1", 00:22:45.585 "trtype": "tcp", 00:22:45.585 "traddr": "10.0.0.3", 00:22:45.585 "adrfam": "ipv4", 00:22:45.585 "trsvcid": "4420", 00:22:45.585 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.585 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:45.585 "hdgst": false, 00:22:45.585 "ddgst": false 00:22:45.585 }, 00:22:45.585 "method": "bdev_nvme_attach_controller" 00:22:45.585 }' 00:22:45.585 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:45.585 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:45.585 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:45.585 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:45.585 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:45.585 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:45.585 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:45.585 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 
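The fio runs in these dif tests never touch a kernel block device: fio is launched with the SPDK bdev plugin LD_PRELOADed, the printf'd JSON above is passed as --spdk_json_conf, and the job file arrives on a second file descriptor. A rough standalone equivalent is sketched below, assuming the JSON is saved as bdev.json and using on-disk files instead of the harness's /dev/fd redirections; Nvme0n1/Nvme1n1 are the namespace bdev names SPDK derives from the attached Nvme0/Nvme1 controllers, and thread=1 is required by the plugin:

# Sketch: drive the two null-backed namespaces directly through fio,
# mirroring the rw=randread, bs=4096, iodepth=4 job banner below
cat > dif.fio <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=bdev.json
thread=1
rw=randread
bs=4096
iodepth=4

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /usr/src/fio/fio dif.fio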
00:22:45.585 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:45.585 02:03:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:45.585 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:45.585 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:45.585 fio-3.35 00:22:45.585 Starting 2 threads 00:22:55.570 00:22:55.570 filename0: (groupid=0, jobs=1): err= 0: pid=97661: Tue Nov 19 02:04:05 2024 00:22:55.570 read: IOPS=5357, BW=20.9MiB/s (21.9MB/s)(209MiB/10001msec) 00:22:55.570 slat (nsec): min=6299, max=61029, avg=12763.31, stdev=4454.12 00:22:55.570 clat (usec): min=555, max=1658, avg=712.10, stdev=60.70 00:22:55.570 lat (usec): min=562, max=1671, avg=724.86, stdev=61.79 00:22:55.570 clat percentiles (usec): 00:22:55.570 | 1.00th=[ 594], 5.00th=[ 627], 10.00th=[ 644], 20.00th=[ 668], 00:22:55.570 | 30.00th=[ 676], 40.00th=[ 693], 50.00th=[ 701], 60.00th=[ 717], 00:22:55.570 | 70.00th=[ 734], 80.00th=[ 758], 90.00th=[ 791], 95.00th=[ 816], 00:22:55.570 | 99.00th=[ 898], 99.50th=[ 930], 99.90th=[ 1004], 99.95th=[ 1106], 00:22:55.570 | 99.99th=[ 1450] 00:22:55.570 bw ( KiB/s): min=20928, max=21920, per=50.04%, avg=21450.11, stdev=268.38, samples=19 00:22:55.570 iops : min= 5232, max= 5480, avg=5362.53, stdev=67.09, samples=19 00:22:55.570 lat (usec) : 750=78.09%, 1000=21.80% 00:22:55.570 lat (msec) : 2=0.11% 00:22:55.570 cpu : usr=88.82%, sys=9.65%, ctx=35, majf=0, minf=0 00:22:55.570 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:55.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.570 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.570 issued rwts: total=53580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:55.570 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:55.570 filename1: (groupid=0, jobs=1): err= 0: pid=97662: Tue Nov 19 02:04:05 2024 00:22:55.570 read: IOPS=5359, BW=20.9MiB/s (21.9MB/s)(209MiB/10001msec) 00:22:55.570 slat (nsec): min=6283, max=79623, avg=12925.87, stdev=4510.48 00:22:55.570 clat (usec): min=394, max=1677, avg=710.96, stdev=54.97 00:22:55.570 lat (usec): min=401, max=1687, avg=723.89, stdev=55.72 00:22:55.570 clat percentiles (usec): 00:22:55.570 | 1.00th=[ 627], 5.00th=[ 644], 10.00th=[ 652], 20.00th=[ 668], 00:22:55.571 | 30.00th=[ 676], 40.00th=[ 693], 50.00th=[ 701], 60.00th=[ 717], 00:22:55.571 | 70.00th=[ 725], 80.00th=[ 750], 90.00th=[ 783], 95.00th=[ 816], 00:22:55.571 | 99.00th=[ 889], 99.50th=[ 914], 99.90th=[ 971], 99.95th=[ 996], 00:22:55.571 | 99.99th=[ 1434] 00:22:55.571 bw ( KiB/s): min=20928, max=21920, per=50.06%, avg=21458.53, stdev=267.16, samples=19 00:22:55.571 iops : min= 5232, max= 5480, avg=5364.63, stdev=66.79, samples=19 00:22:55.571 lat (usec) : 500=0.02%, 750=80.35%, 1000=19.58% 00:22:55.571 lat (msec) : 2=0.04% 00:22:55.571 cpu : usr=89.60%, sys=8.98%, ctx=15, majf=0, minf=0 00:22:55.571 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:55.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.571 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:55.571 issued rwts: total=53596,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:22:55.571 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:55.571 00:22:55.571 Run status group 0 (all jobs): 00:22:55.571 READ: bw=41.9MiB/s (43.9MB/s), 20.9MiB/s-20.9MiB/s (21.9MB/s-21.9MB/s), io=419MiB (439MB), run=10001-10001msec 00:22:55.571 02:04:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:22:55.571 02:04:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:22:55.571 02:04:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:22:55.571 02:04:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:55.571 02:04:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:22:55.571 02:04:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:55.571 02:04:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.571 02:04:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:55.571 02:04:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.571 02:04:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:55.571 02:04:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.571 02:04:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:55.571 02:04:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.571 02:04:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:22:55.571 02:04:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:55.571 02:04:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:22:55.571 02:04:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:55.571 02:04:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.571 02:04:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:55.571 02:04:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.571 02:04:05 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:55.571 02:04:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.571 02:04:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:55.571 02:04:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.571 00:22:55.571 real 0m10.985s 00:22:55.571 user 0m18.508s 00:22:55.571 sys 0m2.094s 00:22:55.571 02:04:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:55.571 02:04:05 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:55.571 ************************************ 00:22:55.571 END TEST fio_dif_1_multi_subsystems 00:22:55.571 ************************************ 00:22:55.571 02:04:05 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:22:55.571 02:04:05 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:55.571 
02:04:05 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:55.571 02:04:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:55.571 ************************************ 00:22:55.571 START TEST fio_dif_rand_params 00:22:55.571 ************************************ 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:55.571 bdev_null0 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:55.571 [2024-11-19 02:04:05.517390] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@106 -- # fio /dev/fd/62 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:55.571 { 00:22:55.571 "params": { 00:22:55.571 "name": "Nvme$subsystem", 00:22:55.571 "trtype": "$TEST_TRANSPORT", 00:22:55.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:55.571 "adrfam": "ipv4", 00:22:55.571 "trsvcid": "$NVMF_PORT", 00:22:55.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:55.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:55.571 "hdgst": ${hdgst:-false}, 00:22:55.571 "ddgst": ${ddgst:-false} 00:22:55.571 }, 00:22:55.571 "method": "bdev_nvme_attach_controller" 00:22:55.571 } 00:22:55.571 EOF 00:22:55.571 )") 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:22:55.571 02:04:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:55.572 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:55.572 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:22:55.572 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:55.572 02:04:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
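The JSON that fio reads from /dev/fd/62 is built by gen_nvmf_target_json, whose xtrace is interleaved through the entries above: one heredoc fragment per subsystem id, each a bdev_nvme_attach_controller call parameterized by $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT, followed by the jq . / IFS=, / printf '%s\n' steps that join and emit the fragments. A condensed, standalone sketch of the pattern (the fragment body is copied from the trace; wrapping the joined fragments in [ ] so jq can validate them here is this sketch's assumption — the real helper splices them into a larger config document):

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(
            cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    jq . <<< "[${config[*]}]"   # comma-join the fragments, validate, pretty-print
}

Called as gen_nvmf_target_json 0, with TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.3 and NVMF_PORT=4420 exported, this reproduces the single Nvme0 controller document printed next in the trace.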
00:22:55.572 02:04:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:22:55.572 02:04:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:55.572 "params": { 00:22:55.572 "name": "Nvme0", 00:22:55.572 "trtype": "tcp", 00:22:55.572 "traddr": "10.0.0.3", 00:22:55.572 "adrfam": "ipv4", 00:22:55.572 "trsvcid": "4420", 00:22:55.572 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:55.572 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:55.572 "hdgst": false, 00:22:55.572 "ddgst": false 00:22:55.572 }, 00:22:55.572 "method": "bdev_nvme_attach_controller" 00:22:55.572 }' 00:22:55.572 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:55.572 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:55.572 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:55.572 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:55.572 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:55.572 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:55.572 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:55.572 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:55.572 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:55.572 02:04:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:55.572 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:22:55.572 ... 
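Before this fio run, create_subsystems backed the target with a null bdev and exposed it over NVMe/TCP: bdev_null_create's positional 64 and 512 are the bdev size in MB and the block size in bytes, while --md-size 16 --dif-type 3 reserve 16 metadata bytes per block and enable DIF type 3 protection (the NULL_DIF=3 parameter set above). The RPC sequence, copied from the trace (rpc_cmd is the suite's wrapper around scripts/rpc.py):

rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.3 -s 4420

The three threads starting below then drive randread I/O against that namespace at bs=128k with iodepth=3 for the 5-second runtime set earlier.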
00:22:55.572 fio-3.35 00:22:55.572 Starting 3 threads 00:23:00.843 00:23:00.843 filename0: (groupid=0, jobs=1): err= 0: pid=97813: Tue Nov 19 02:04:11 2024 00:23:00.843 read: IOPS=283, BW=35.4MiB/s (37.1MB/s)(177MiB/5008msec) 00:23:00.843 slat (nsec): min=6481, max=32887, avg=9117.45, stdev=3465.83 00:23:00.843 clat (usec): min=4941, max=12411, avg=10563.32, stdev=496.86 00:23:00.843 lat (usec): min=4950, max=12425, avg=10572.44, stdev=497.16 00:23:00.843 clat percentiles (usec): 00:23:00.843 | 1.00th=[10159], 5.00th=[10159], 10.00th=[10290], 20.00th=[10290], 00:23:00.843 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10421], 60.00th=[10552], 00:23:00.843 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11076], 95.00th=[11600], 00:23:00.843 | 99.00th=[12125], 99.50th=[12256], 99.90th=[12387], 99.95th=[12387], 00:23:00.843 | 99.99th=[12387] 00:23:00.843 bw ( KiB/s): min=35328, max=36864, per=33.36%, avg=36242.30, stdev=707.88, samples=10 00:23:00.843 iops : min= 276, max= 288, avg=283.10, stdev= 5.55, samples=10 00:23:00.843 lat (msec) : 10=0.21%, 20=99.79% 00:23:00.843 cpu : usr=90.83%, sys=8.61%, ctx=11, majf=0, minf=3 00:23:00.843 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:00.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.843 issued rwts: total=1419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.843 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:00.843 filename0: (groupid=0, jobs=1): err= 0: pid=97814: Tue Nov 19 02:04:11 2024 00:23:00.843 read: IOPS=283, BW=35.4MiB/s (37.1MB/s)(177MiB/5003msec) 00:23:00.843 slat (nsec): min=6740, max=54450, avg=13643.10, stdev=4111.82 00:23:00.843 clat (usec): min=8870, max=13011, avg=10568.00, stdev=444.37 00:23:00.843 lat (usec): min=8882, max=13036, avg=10581.65, stdev=444.47 00:23:00.843 clat percentiles (usec): 00:23:00.844 | 1.00th=[10159], 5.00th=[10159], 10.00th=[10159], 20.00th=[10290], 00:23:00.844 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10421], 60.00th=[10552], 00:23:00.844 | 70.00th=[10683], 80.00th=[10814], 90.00th=[11076], 95.00th=[11469], 00:23:00.844 | 99.00th=[12256], 99.50th=[12387], 99.90th=[13042], 99.95th=[13042], 00:23:00.844 | 99.99th=[13042] 00:23:00.844 bw ( KiB/s): min=35328, max=37632, per=33.38%, avg=36266.67, stdev=839.35, samples=9 00:23:00.844 iops : min= 276, max= 294, avg=283.33, stdev= 6.56, samples=9 00:23:00.844 lat (msec) : 10=0.21%, 20=99.79% 00:23:00.844 cpu : usr=91.74%, sys=7.74%, ctx=90, majf=0, minf=0 00:23:00.844 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:00.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.844 issued rwts: total=1416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.844 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:00.844 filename0: (groupid=0, jobs=1): err= 0: pid=97815: Tue Nov 19 02:04:11 2024 00:23:00.844 read: IOPS=283, BW=35.4MiB/s (37.1MB/s)(177MiB/5003msec) 00:23:00.844 slat (nsec): min=6828, max=44726, avg=13059.22, stdev=4090.32 00:23:00.844 clat (usec): min=8878, max=13087, avg=10569.91, stdev=445.44 00:23:00.844 lat (usec): min=8890, max=13113, avg=10582.97, stdev=445.53 00:23:00.844 clat percentiles (usec): 00:23:00.844 | 1.00th=[10159], 5.00th=[10159], 10.00th=[10159], 20.00th=[10290], 00:23:00.844 | 30.00th=[10290], 40.00th=[10421], 
50.00th=[10421], 60.00th=[10552], 00:23:00.844 | 70.00th=[10683], 80.00th=[10814], 90.00th=[11076], 95.00th=[11469], 00:23:00.844 | 99.00th=[12256], 99.50th=[12387], 99.90th=[13042], 99.95th=[13042], 00:23:00.844 | 99.99th=[13042] 00:23:00.844 bw ( KiB/s): min=35328, max=37632, per=33.38%, avg=36266.67, stdev=839.35, samples=9 00:23:00.844 iops : min= 276, max= 294, avg=283.33, stdev= 6.56, samples=9 00:23:00.844 lat (msec) : 10=0.21%, 20=99.79% 00:23:00.844 cpu : usr=91.38%, sys=8.12%, ctx=6, majf=0, minf=0 00:23:00.844 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:00.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.844 issued rwts: total=1416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.844 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:00.844 00:23:00.844 Run status group 0 (all jobs): 00:23:00.844 READ: bw=106MiB/s (111MB/s), 35.4MiB/s-35.4MiB/s (37.1MB/s-37.1MB/s), io=531MiB (557MB), run=5003-5008msec 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:23:00.844 02:04:11 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:00.844 bdev_null0 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:00.844 [2024-11-19 02:04:11.410700] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:00.844 bdev_null1 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:00.844 bdev_null2 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.844 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:01.104 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.104 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:23:01.104 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.104 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:01.104 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.104 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:23:01.104 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.104 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:01.104 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.104 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:23:01.104 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:23:01.104 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:23:01.104 02:04:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:23:01.104 02:04:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:23:01.104 02:04:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.104 02:04:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.104 { 00:23:01.104 "params": { 00:23:01.104 "name": "Nvme$subsystem", 00:23:01.104 "trtype": "$TEST_TRANSPORT", 00:23:01.104 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.104 "adrfam": "ipv4", 00:23:01.104 "trsvcid": "$NVMF_PORT", 00:23:01.104 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:23:01.104 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.104 "hdgst": ${hdgst:-false}, 00:23:01.104 "ddgst": ${ddgst:-false} 00:23:01.104 }, 00:23:01.104 "method": "bdev_nvme_attach_controller" 00:23:01.104 } 00:23:01.104 EOF 00:23:01.104 )") 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.105 { 00:23:01.105 "params": { 00:23:01.105 "name": "Nvme$subsystem", 00:23:01.105 "trtype": "$TEST_TRANSPORT", 00:23:01.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.105 "adrfam": "ipv4", 00:23:01.105 "trsvcid": "$NVMF_PORT", 00:23:01.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.105 "hdgst": ${hdgst:-false}, 00:23:01.105 "ddgst": ${ddgst:-false} 00:23:01.105 }, 00:23:01.105 "method": "bdev_nvme_attach_controller" 00:23:01.105 } 00:23:01.105 EOF 00:23:01.105 )") 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:01.105 { 00:23:01.105 "params": { 00:23:01.105 "name": "Nvme$subsystem", 00:23:01.105 
"trtype": "$TEST_TRANSPORT", 00:23:01.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:01.105 "adrfam": "ipv4", 00:23:01.105 "trsvcid": "$NVMF_PORT", 00:23:01.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:01.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:01.105 "hdgst": ${hdgst:-false}, 00:23:01.105 "ddgst": ${ddgst:-false} 00:23:01.105 }, 00:23:01.105 "method": "bdev_nvme_attach_controller" 00:23:01.105 } 00:23:01.105 EOF 00:23:01.105 )") 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:01.105 "params": { 00:23:01.105 "name": "Nvme0", 00:23:01.105 "trtype": "tcp", 00:23:01.105 "traddr": "10.0.0.3", 00:23:01.105 "adrfam": "ipv4", 00:23:01.105 "trsvcid": "4420", 00:23:01.105 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:01.105 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:01.105 "hdgst": false, 00:23:01.105 "ddgst": false 00:23:01.105 }, 00:23:01.105 "method": "bdev_nvme_attach_controller" 00:23:01.105 },{ 00:23:01.105 "params": { 00:23:01.105 "name": "Nvme1", 00:23:01.105 "trtype": "tcp", 00:23:01.105 "traddr": "10.0.0.3", 00:23:01.105 "adrfam": "ipv4", 00:23:01.105 "trsvcid": "4420", 00:23:01.105 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.105 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:01.105 "hdgst": false, 00:23:01.105 "ddgst": false 00:23:01.105 }, 00:23:01.105 "method": "bdev_nvme_attach_controller" 00:23:01.105 },{ 00:23:01.105 "params": { 00:23:01.105 "name": "Nvme2", 00:23:01.105 "trtype": "tcp", 00:23:01.105 "traddr": "10.0.0.3", 00:23:01.105 "adrfam": "ipv4", 00:23:01.105 "trsvcid": "4420", 00:23:01.105 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:01.105 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:01.105 "hdgst": false, 00:23:01.105 "ddgst": false 00:23:01.105 }, 00:23:01.105 "method": "bdev_nvme_attach_controller" 00:23:01.105 }' 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:01.105 02:04:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:01.105 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:01.105 ... 00:23:01.105 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:01.105 ... 00:23:01.105 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:01.105 ... 00:23:01.105 fio-3.35 00:23:01.105 Starting 24 threads 00:23:13.371 00:23:13.371 filename0: (groupid=0, jobs=1): err= 0: pid=97907: Tue Nov 19 02:04:22 2024 00:23:13.371 read: IOPS=250, BW=1001KiB/s (1025kB/s)(9.81MiB/10035msec) 00:23:13.371 slat (usec): min=5, max=12025, avg=28.98, stdev=366.08 00:23:13.371 clat (msec): min=20, max=121, avg=63.73, stdev=20.32 00:23:13.371 lat (msec): min=20, max=121, avg=63.76, stdev=20.32 00:23:13.371 clat percentiles (msec): 00:23:13.371 | 1.00th=[ 27], 5.00th=[ 32], 10.00th=[ 32], 20.00th=[ 48], 00:23:13.371 | 30.00th=[ 49], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 72], 00:23:13.371 | 70.00th=[ 73], 80.00th=[ 82], 90.00th=[ 85], 95.00th=[ 96], 00:23:13.371 | 99.00th=[ 109], 99.50th=[ 110], 99.90th=[ 121], 99.95th=[ 123], 00:23:13.371 | 99.99th=[ 123] 00:23:13.371 bw ( KiB/s): min= 766, max= 1792, per=4.07%, avg=998.30, stdev=280.99, samples=20 00:23:13.371 iops : min= 191, max= 448, avg=249.55, stdev=70.27, samples=20 00:23:13.371 lat (msec) : 50=33.08%, 100=63.93%, 250=2.99% 00:23:13.371 cpu : usr=30.50%, sys=1.79%, ctx=840, majf=0, minf=9 00:23:13.371 IO depths : 1=0.1%, 2=1.7%, 4=6.5%, 8=76.4%, 16=15.4%, 32=0.0%, >=64=0.0% 00:23:13.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.371 complete : 0=0.0%, 4=89.1%, 8=9.5%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.372 issued rwts: total=2512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.372 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.372 filename0: (groupid=0, jobs=1): err= 0: pid=97908: Tue Nov 19 02:04:22 2024 00:23:13.372 read: IOPS=258, BW=1035KiB/s (1060kB/s)(10.1MiB/10035msec) 00:23:13.372 slat (usec): min=3, max=8022, avg=19.53, stdev=175.75 00:23:13.372 clat (msec): min=14, max=120, avg=61.73, stdev=19.63 00:23:13.372 lat (msec): min=14, max=120, avg=61.75, stdev=19.64 00:23:13.372 clat percentiles (msec): 00:23:13.372 | 1.00th=[ 26], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 45], 00:23:13.372 | 30.00th=[ 49], 40.00th=[ 56], 50.00th=[ 66], 60.00th=[ 72], 00:23:13.372 | 70.00th=[ 73], 80.00th=[ 80], 90.00th=[ 85], 95.00th=[ 91], 00:23:13.372 | 99.00th=[ 107], 99.50th=[ 111], 99.90th=[ 118], 99.95th=[ 118], 00:23:13.372 | 99.99th=[ 122] 00:23:13.372 bw ( KiB/s): min= 840, max= 1856, per=4.21%, avg=1032.25, stdev=279.21, samples=20 00:23:13.372 iops : min= 210, max= 464, avg=258.05, stdev=69.80, samples=20 00:23:13.372 lat (msec) : 20=0.04%, 50=34.42%, 100=63.73%, 250=1.81% 00:23:13.372 cpu : usr=35.12%, sys=2.16%, ctx=1056, majf=0, minf=9 00:23:13.372 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=81.8%, 16=16.1%, 32=0.0%, >=64=0.0% 00:23:13.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.372 complete : 0=0.0%, 4=87.7%, 8=11.7%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.372 issued rwts: total=2597,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.372 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:23:13.372 filename0: (groupid=0, jobs=1): err= 0: pid=97909: Tue Nov 19 02:04:22 2024 00:23:13.372 read: IOPS=260, BW=1040KiB/s (1065kB/s)(10.2MiB/10042msec) 00:23:13.372 slat (usec): min=3, max=6482, avg=21.64, stdev=205.30 00:23:13.372 clat (msec): min=16, max=121, avg=61.35, stdev=21.22 00:23:13.372 lat (msec): min=16, max=121, avg=61.37, stdev=21.22 00:23:13.372 clat percentiles (msec): 00:23:13.372 | 1.00th=[ 21], 5.00th=[ 24], 10.00th=[ 31], 20.00th=[ 43], 00:23:13.372 | 30.00th=[ 49], 40.00th=[ 56], 50.00th=[ 67], 60.00th=[ 71], 00:23:13.372 | 70.00th=[ 75], 80.00th=[ 80], 90.00th=[ 86], 95.00th=[ 93], 00:23:13.372 | 99.00th=[ 109], 99.50th=[ 116], 99.90th=[ 122], 99.95th=[ 122], 00:23:13.372 | 99.99th=[ 122] 00:23:13.372 bw ( KiB/s): min= 768, max= 2184, per=4.24%, avg=1040.80, stdev=349.75, samples=20 00:23:13.372 iops : min= 192, max= 546, avg=260.20, stdev=87.44, samples=20 00:23:13.372 lat (msec) : 20=0.96%, 50=31.58%, 100=64.89%, 250=2.57% 00:23:13.372 cpu : usr=37.80%, sys=2.18%, ctx=1467, majf=0, minf=9 00:23:13.372 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=81.5%, 16=16.4%, 32=0.0%, >=64=0.0% 00:23:13.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.372 complete : 0=0.0%, 4=87.9%, 8=11.8%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.372 issued rwts: total=2612,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.372 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.372 filename0: (groupid=0, jobs=1): err= 0: pid=97910: Tue Nov 19 02:04:22 2024 00:23:13.372 read: IOPS=260, BW=1043KiB/s (1068kB/s)(10.2MiB/10035msec) 00:23:13.372 slat (usec): min=4, max=8022, avg=20.54, stdev=191.87 00:23:13.372 clat (msec): min=15, max=131, avg=61.23, stdev=22.05 00:23:13.372 lat (msec): min=16, max=131, avg=61.25, stdev=22.06 00:23:13.372 clat percentiles (msec): 00:23:13.372 | 1.00th=[ 18], 5.00th=[ 23], 10.00th=[ 29], 20.00th=[ 41], 00:23:13.372 | 30.00th=[ 49], 40.00th=[ 56], 50.00th=[ 67], 60.00th=[ 72], 00:23:13.372 | 70.00th=[ 74], 80.00th=[ 81], 90.00th=[ 86], 95.00th=[ 95], 00:23:13.372 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 121], 99.95th=[ 121], 00:23:13.372 | 99.99th=[ 132] 00:23:13.372 bw ( KiB/s): min= 768, max= 2220, per=4.25%, avg=1042.60, stdev=383.02, samples=20 00:23:13.372 iops : min= 192, max= 555, avg=260.65, stdev=95.76, samples=20 00:23:13.372 lat (msec) : 20=3.02%, 50=29.85%, 100=64.83%, 250=2.29% 00:23:13.372 cpu : usr=39.98%, sys=2.14%, ctx=1212, majf=0, minf=9 00:23:13.372 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=81.8%, 16=16.4%, 32=0.0%, >=64=0.0% 00:23:13.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.372 complete : 0=0.0%, 4=87.8%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.372 issued rwts: total=2616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.372 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.372 filename0: (groupid=0, jobs=1): err= 0: pid=97911: Tue Nov 19 02:04:22 2024 00:23:13.372 read: IOPS=261, BW=1046KiB/s (1071kB/s)(10.2MiB/10025msec) 00:23:13.372 slat (usec): min=4, max=8027, avg=23.85, stdev=270.96 00:23:13.372 clat (msec): min=15, max=119, avg=61.05, stdev=19.61 00:23:13.372 lat (msec): min=15, max=127, avg=61.07, stdev=19.62 00:23:13.372 clat percentiles (msec): 00:23:13.372 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 47], 00:23:13.372 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 61], 60.00th=[ 72], 00:23:13.372 | 70.00th=[ 72], 80.00th=[ 79], 90.00th=[ 85], 95.00th=[ 95], 00:23:13.372 
| 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 117], 99.95th=[ 117], 00:23:13.372 | 99.99th=[ 120] 00:23:13.372 bw ( KiB/s): min= 848, max= 1856, per=4.25%, avg=1042.20, stdev=267.40, samples=20 00:23:13.372 iops : min= 212, max= 464, avg=260.50, stdev=66.87, samples=20 00:23:13.372 lat (msec) : 20=0.88%, 50=37.95%, 100=58.85%, 250=2.33% 00:23:13.372 cpu : usr=30.65%, sys=1.72%, ctx=840, majf=0, minf=9 00:23:13.372 IO depths : 1=0.1%, 2=0.2%, 4=1.2%, 8=82.6%, 16=16.0%, 32=0.0%, >=64=0.0% 00:23:13.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.372 complete : 0=0.0%, 4=87.4%, 8=12.1%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.372 issued rwts: total=2622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.372 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.372 filename0: (groupid=0, jobs=1): err= 0: pid=97912: Tue Nov 19 02:04:22 2024 00:23:13.372 read: IOPS=251, BW=1006KiB/s (1030kB/s)(9.87MiB/10047msec) 00:23:13.372 slat (usec): min=3, max=8026, avg=22.04, stdev=276.02 00:23:13.372 clat (usec): min=1504, max=132045, avg=63489.06, stdev=26232.86 00:23:13.372 lat (usec): min=1512, max=132053, avg=63511.10, stdev=26243.76 00:23:13.372 clat percentiles (usec): 00:23:13.372 | 1.00th=[ 1598], 5.00th=[ 7177], 10.00th=[ 30016], 20.00th=[ 32900], 00:23:13.372 | 30.00th=[ 50070], 40.00th=[ 64226], 50.00th=[ 71828], 60.00th=[ 72877], 00:23:13.372 | 70.00th=[ 79168], 80.00th=[ 84411], 90.00th=[ 94897], 95.00th=[ 99091], 00:23:13.372 | 99.00th=[111674], 99.50th=[111674], 99.90th=[125305], 99.95th=[125305], 00:23:13.372 | 99.99th=[131597] 00:23:13.372 bw ( KiB/s): min= 672, max= 3072, per=4.09%, avg=1004.00, stdev=534.97, samples=20 00:23:13.372 iops : min= 168, max= 768, avg=251.00, stdev=133.74, samples=20 00:23:13.372 lat (msec) : 2=2.97%, 4=0.83%, 10=1.27%, 20=1.19%, 50=23.95% 00:23:13.372 lat (msec) : 100=65.04%, 250=4.75% 00:23:13.372 cpu : usr=42.72%, sys=2.63%, ctx=997, majf=0, minf=0 00:23:13.372 IO depths : 1=0.3%, 2=3.0%, 4=11.4%, 8=70.5%, 16=14.8%, 32=0.0%, >=64=0.0% 00:23:13.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.372 complete : 0=0.0%, 4=90.8%, 8=6.7%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.372 issued rwts: total=2526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.372 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.372 filename0: (groupid=0, jobs=1): err= 0: pid=97913: Tue Nov 19 02:04:22 2024 00:23:13.372 read: IOPS=262, BW=1051KiB/s (1076kB/s)(10.3MiB/10037msec) 00:23:13.372 slat (usec): min=4, max=10025, avg=25.09, stdev=253.05 00:23:13.372 clat (msec): min=18, max=117, avg=60.75, stdev=18.54 00:23:13.372 lat (msec): min=18, max=117, avg=60.77, stdev=18.55 00:23:13.372 clat percentiles (msec): 00:23:13.372 | 1.00th=[ 24], 5.00th=[ 31], 10.00th=[ 36], 20.00th=[ 45], 00:23:13.372 | 30.00th=[ 49], 40.00th=[ 53], 50.00th=[ 61], 60.00th=[ 69], 00:23:13.372 | 70.00th=[ 73], 80.00th=[ 78], 90.00th=[ 84], 95.00th=[ 90], 00:23:13.372 | 99.00th=[ 106], 99.50th=[ 110], 99.90th=[ 117], 99.95th=[ 117], 00:23:13.372 | 99.99th=[ 117] 00:23:13.372 bw ( KiB/s): min= 872, max= 1792, per=4.28%, avg=1048.25, stdev=235.99, samples=20 00:23:13.372 iops : min= 218, max= 448, avg=262.05, stdev=59.00, samples=20 00:23:13.372 lat (msec) : 20=0.15%, 50=33.56%, 100=64.28%, 250=2.01% 00:23:13.372 cpu : usr=39.80%, sys=2.27%, ctx=1742, majf=0, minf=9 00:23:13.372 IO depths : 1=0.1%, 2=0.9%, 4=3.3%, 8=80.4%, 16=15.4%, 32=0.0%, >=64=0.0% 00:23:13.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.372 complete : 0=0.0%, 4=87.7%, 8=11.6%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.372 issued rwts: total=2637,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.372 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.372 filename0: (groupid=0, jobs=1): err= 0: pid=97914: Tue Nov 19 02:04:22 2024 00:23:13.372 read: IOPS=235, BW=941KiB/s (964kB/s)(9448KiB/10039msec) 00:23:13.372 slat (usec): min=3, max=12024, avg=28.73, stdev=394.97 00:23:13.372 clat (msec): min=8, max=132, avg=67.74, stdev=22.61 00:23:13.372 lat (msec): min=8, max=132, avg=67.77, stdev=22.61 00:23:13.372 clat percentiles (msec): 00:23:13.373 | 1.00th=[ 18], 5.00th=[ 32], 10.00th=[ 32], 20.00th=[ 48], 00:23:13.373 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 74], 00:23:13.373 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 102], 00:23:13.373 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 132], 00:23:13.373 | 99.99th=[ 132] 00:23:13.373 bw ( KiB/s): min= 768, max= 1939, per=3.84%, avg=941.35, stdev=322.24, samples=20 00:23:13.373 iops : min= 192, max= 484, avg=235.30, stdev=80.44, samples=20 00:23:13.373 lat (msec) : 10=0.59%, 20=1.44%, 50=23.84%, 100=68.92%, 250=5.21% 00:23:13.373 cpu : usr=30.67%, sys=1.84%, ctx=864, majf=0, minf=9 00:23:13.373 IO depths : 1=0.1%, 2=3.1%, 4=12.7%, 8=69.3%, 16=14.9%, 32=0.0%, >=64=0.0% 00:23:13.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.373 complete : 0=0.0%, 4=91.2%, 8=6.0%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.373 issued rwts: total=2362,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.373 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.373 filename1: (groupid=0, jobs=1): err= 0: pid=97915: Tue Nov 19 02:04:22 2024 00:23:13.373 read: IOPS=240, BW=961KiB/s (984kB/s)(9624KiB/10013msec) 00:23:13.373 slat (usec): min=4, max=12029, avg=24.65, stdev=305.45 00:23:13.373 clat (msec): min=15, max=131, avg=66.41, stdev=20.71 00:23:13.373 lat (msec): min=15, max=131, avg=66.43, stdev=20.71 00:23:13.373 clat percentiles (msec): 00:23:13.373 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 35], 20.00th=[ 48], 00:23:13.373 | 30.00th=[ 53], 40.00th=[ 62], 50.00th=[ 71], 60.00th=[ 73], 00:23:13.373 | 70.00th=[ 80], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 100], 00:23:13.373 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 123], 99.95th=[ 132], 00:23:13.373 | 99.99th=[ 132] 00:23:13.373 bw ( KiB/s): min= 656, max= 1648, per=3.87%, avg=949.42, stdev=234.02, samples=19 00:23:13.373 iops : min= 164, max= 412, avg=237.32, stdev=58.42, samples=19 00:23:13.373 lat (msec) : 20=0.42%, 50=27.64%, 100=67.12%, 250=4.82% 00:23:13.373 cpu : usr=38.62%, sys=2.12%, ctx=1157, majf=0, minf=9 00:23:13.373 IO depths : 1=0.1%, 2=2.7%, 4=11.1%, 8=71.7%, 16=14.3%, 32=0.0%, >=64=0.0% 00:23:13.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.373 complete : 0=0.0%, 4=90.1%, 8=7.4%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.373 issued rwts: total=2406,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.373 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.373 filename1: (groupid=0, jobs=1): err= 0: pid=97916: Tue Nov 19 02:04:22 2024 00:23:13.373 read: IOPS=253, BW=1014KiB/s (1038kB/s)(9.94MiB/10039msec) 00:23:13.373 slat (usec): min=3, max=12025, avg=30.16, stdev=360.70 00:23:13.373 clat (msec): min=16, max=119, avg=62.99, stdev=20.27 00:23:13.373 lat (msec): min=16, max=119, avg=63.02, stdev=20.27 00:23:13.373 clat percentiles (msec): 
00:23:13.373 | 1.00th=[ 23], 5.00th=[ 32], 10.00th=[ 32], 20.00th=[ 47], 00:23:13.373 | 30.00th=[ 49], 40.00th=[ 58], 50.00th=[ 70], 60.00th=[ 72], 00:23:13.373 | 70.00th=[ 74], 80.00th=[ 81], 90.00th=[ 85], 95.00th=[ 95], 00:23:13.373 | 99.00th=[ 109], 99.50th=[ 117], 99.90th=[ 121], 99.95th=[ 121], 00:23:13.373 | 99.99th=[ 121] 00:23:13.373 bw ( KiB/s): min= 760, max= 1880, per=4.12%, avg=1010.80, stdev=291.06, samples=20 00:23:13.373 iops : min= 190, max= 470, avg=252.70, stdev=72.77, samples=20 00:23:13.373 lat (msec) : 20=0.79%, 50=32.47%, 100=64.39%, 250=2.36% 00:23:13.373 cpu : usr=35.31%, sys=1.95%, ctx=1010, majf=0, minf=9 00:23:13.373 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=81.8%, 16=16.7%, 32=0.0%, >=64=0.0% 00:23:13.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.373 complete : 0=0.0%, 4=88.0%, 8=11.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.373 issued rwts: total=2544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.373 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.373 filename1: (groupid=0, jobs=1): err= 0: pid=97917: Tue Nov 19 02:04:22 2024 00:23:13.373 read: IOPS=255, BW=1021KiB/s (1046kB/s)(9.98MiB/10005msec) 00:23:13.373 slat (usec): min=4, max=4036, avg=23.98, stdev=194.35 00:23:13.373 clat (msec): min=11, max=119, avg=62.53, stdev=20.35 00:23:13.373 lat (msec): min=11, max=119, avg=62.55, stdev=20.35 00:23:13.373 clat percentiles (msec): 00:23:13.373 | 1.00th=[ 27], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 47], 00:23:13.373 | 30.00th=[ 50], 40.00th=[ 55], 50.00th=[ 66], 60.00th=[ 72], 00:23:13.373 | 70.00th=[ 74], 80.00th=[ 80], 90.00th=[ 87], 95.00th=[ 95], 00:23:13.373 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 121], 00:23:13.373 | 99.99th=[ 121] 00:23:13.373 bw ( KiB/s): min= 752, max= 1792, per=4.13%, avg=1011.37, stdev=266.58, samples=19 00:23:13.373 iops : min= 188, max= 448, avg=252.84, stdev=66.64, samples=19 00:23:13.373 lat (msec) : 20=0.12%, 50=32.60%, 100=63.84%, 250=3.44% 00:23:13.373 cpu : usr=43.09%, sys=2.46%, ctx=1282, majf=0, minf=10 00:23:13.373 IO depths : 1=0.1%, 2=2.0%, 4=7.9%, 8=75.4%, 16=14.6%, 32=0.0%, >=64=0.0% 00:23:13.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.373 complete : 0=0.0%, 4=89.0%, 8=9.3%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.373 issued rwts: total=2555,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.373 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.373 filename1: (groupid=0, jobs=1): err= 0: pid=97918: Tue Nov 19 02:04:22 2024 00:23:13.373 read: IOPS=250, BW=1001KiB/s (1025kB/s)(9.82MiB/10041msec) 00:23:13.373 slat (usec): min=4, max=8025, avg=17.48, stdev=173.69 00:23:13.373 clat (msec): min=8, max=133, avg=63.77, stdev=20.51 00:23:13.373 lat (msec): min=8, max=133, avg=63.79, stdev=20.51 00:23:13.373 clat percentiles (msec): 00:23:13.373 | 1.00th=[ 16], 5.00th=[ 31], 10.00th=[ 33], 20.00th=[ 46], 00:23:13.373 | 30.00th=[ 51], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 72], 00:23:13.373 | 70.00th=[ 75], 80.00th=[ 83], 90.00th=[ 86], 95.00th=[ 95], 00:23:13.373 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 120], 99.95th=[ 121], 00:23:13.373 | 99.99th=[ 134] 00:23:13.373 bw ( KiB/s): min= 808, max= 2031, per=4.07%, avg=997.95, stdev=305.27, samples=20 00:23:13.373 iops : min= 202, max= 507, avg=249.45, stdev=76.18, samples=20 00:23:13.373 lat (msec) : 10=0.64%, 20=1.31%, 50=26.74%, 100=69.16%, 250=2.15% 00:23:13.373 cpu : usr=34.95%, sys=2.21%, ctx=1172, majf=0, minf=9 00:23:13.373 IO depths : 
1=0.2%, 2=1.0%, 4=3.6%, 8=79.0%, 16=16.3%, 32=0.0%, >=64=0.0% 00:23:13.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.373 complete : 0=0.0%, 4=88.7%, 8=10.5%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.373 issued rwts: total=2513,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.373 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.373 filename1: (groupid=0, jobs=1): err= 0: pid=97919: Tue Nov 19 02:04:22 2024 00:23:13.373 read: IOPS=244, BW=980KiB/s (1003kB/s)(9820KiB/10024msec) 00:23:13.373 slat (usec): min=3, max=12029, avg=21.82, stdev=291.55 00:23:13.373 clat (msec): min=19, max=130, avg=65.14, stdev=19.44 00:23:13.373 lat (msec): min=19, max=130, avg=65.16, stdev=19.43 00:23:13.373 clat percentiles (msec): 00:23:13.373 | 1.00th=[ 29], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 48], 00:23:13.373 | 30.00th=[ 50], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 72], 00:23:13.373 | 70.00th=[ 77], 80.00th=[ 82], 90.00th=[ 89], 95.00th=[ 96], 00:23:13.373 | 99.00th=[ 108], 99.50th=[ 108], 99.90th=[ 121], 99.95th=[ 124], 00:23:13.373 | 99.99th=[ 131] 00:23:13.373 bw ( KiB/s): min= 768, max= 1536, per=3.99%, avg=977.40, stdev=201.64, samples=20 00:23:13.373 iops : min= 192, max= 384, avg=244.30, stdev=50.35, samples=20 00:23:13.373 lat (msec) : 20=0.08%, 50=31.04%, 100=66.23%, 250=2.65% 00:23:13.373 cpu : usr=35.54%, sys=2.19%, ctx=1110, majf=0, minf=9 00:23:13.373 IO depths : 1=0.1%, 2=1.6%, 4=6.6%, 8=76.4%, 16=15.4%, 32=0.0%, >=64=0.0% 00:23:13.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.373 complete : 0=0.0%, 4=89.0%, 8=9.5%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.373 issued rwts: total=2455,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.373 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.373 filename1: (groupid=0, jobs=1): err= 0: pid=97920: Tue Nov 19 02:04:22 2024 00:23:13.373 read: IOPS=259, BW=1036KiB/s (1061kB/s)(10.2MiB/10030msec) 00:23:13.373 slat (usec): min=3, max=11030, avg=27.24, stdev=295.93 00:23:13.373 clat (msec): min=23, max=118, avg=61.58, stdev=17.92 00:23:13.373 lat (msec): min=23, max=118, avg=61.61, stdev=17.92 00:23:13.373 clat percentiles (msec): 00:23:13.373 | 1.00th=[ 31], 5.00th=[ 34], 10.00th=[ 37], 20.00th=[ 47], 00:23:13.373 | 30.00th=[ 49], 40.00th=[ 54], 50.00th=[ 61], 60.00th=[ 70], 00:23:13.373 | 70.00th=[ 72], 80.00th=[ 78], 90.00th=[ 84], 95.00th=[ 91], 00:23:13.373 | 99.00th=[ 106], 99.50th=[ 109], 99.90th=[ 120], 99.95th=[ 120], 00:23:13.373 | 99.99th=[ 120] 00:23:13.373 bw ( KiB/s): min= 872, max= 1664, per=4.22%, avg=1033.05, stdev=199.10, samples=20 00:23:13.373 iops : min= 218, max= 416, avg=258.25, stdev=49.78, samples=20 00:23:13.373 lat (msec) : 50=33.90%, 100=64.41%, 250=1.69% 00:23:13.373 cpu : usr=38.94%, sys=1.97%, ctx=1162, majf=0, minf=9 00:23:13.374 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=79.8%, 16=15.3%, 32=0.0%, >=64=0.0% 00:23:13.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.374 complete : 0=0.0%, 4=87.9%, 8=11.3%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.374 issued rwts: total=2599,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.374 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.374 filename1: (groupid=0, jobs=1): err= 0: pid=97921: Tue Nov 19 02:04:22 2024 00:23:13.374 read: IOPS=266, BW=1066KiB/s (1091kB/s)(10.4MiB/10003msec) 00:23:13.374 slat (usec): min=4, max=12023, avg=27.63, stdev=343.39 00:23:13.374 clat (msec): min=4, max=116, avg=59.92, stdev=19.46 
00:23:13.374 lat (msec): min=4, max=116, avg=59.95, stdev=19.46 00:23:13.374 clat percentiles (msec): 00:23:13.374 | 1.00th=[ 17], 5.00th=[ 32], 10.00th=[ 32], 20.00th=[ 46], 00:23:13.374 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 61], 60.00th=[ 70], 00:23:13.374 | 70.00th=[ 72], 80.00th=[ 77], 90.00th=[ 84], 95.00th=[ 90], 00:23:13.374 | 99.00th=[ 105], 99.50th=[ 110], 99.90th=[ 116], 99.95th=[ 116], 00:23:13.374 | 99.99th=[ 116] 00:23:13.374 bw ( KiB/s): min= 920, max= 1664, per=4.28%, avg=1050.32, stdev=218.63, samples=19 00:23:13.374 iops : min= 230, max= 416, avg=262.58, stdev=54.66, samples=19 00:23:13.374 lat (msec) : 10=0.75%, 20=0.56%, 50=36.59%, 100=60.60%, 250=1.50% 00:23:13.374 cpu : usr=35.81%, sys=2.04%, ctx=1106, majf=0, minf=9 00:23:13.374 IO depths : 1=0.1%, 2=0.9%, 4=3.4%, 8=80.3%, 16=15.3%, 32=0.0%, >=64=0.0% 00:23:13.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.374 complete : 0=0.0%, 4=87.7%, 8=11.6%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.374 issued rwts: total=2665,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.374 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.374 filename1: (groupid=0, jobs=1): err= 0: pid=97922: Tue Nov 19 02:04:22 2024 00:23:13.374 read: IOPS=257, BW=1030KiB/s (1055kB/s)(10.1MiB/10002msec) 00:23:13.374 slat (usec): min=4, max=8032, avg=22.24, stdev=217.89 00:23:13.374 clat (usec): min=1257, max=127958, avg=62016.15, stdev=21146.55 00:23:13.374 lat (usec): min=1274, max=127981, avg=62038.39, stdev=21140.90 00:23:13.374 clat percentiles (msec): 00:23:13.374 | 1.00th=[ 5], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 46], 00:23:13.374 | 30.00th=[ 49], 40.00th=[ 55], 50.00th=[ 64], 60.00th=[ 71], 00:23:13.374 | 70.00th=[ 74], 80.00th=[ 81], 90.00th=[ 88], 95.00th=[ 96], 00:23:13.374 | 99.00th=[ 109], 99.50th=[ 109], 99.90th=[ 120], 99.95th=[ 129], 00:23:13.374 | 99.99th=[ 129] 00:23:13.374 bw ( KiB/s): min= 752, max= 1680, per=4.10%, avg=1005.79, stdev=227.17, samples=19 00:23:13.374 iops : min= 188, max= 420, avg=251.42, stdev=56.73, samples=19 00:23:13.374 lat (msec) : 2=0.04%, 4=0.70%, 10=0.97%, 20=0.62%, 50=31.09% 00:23:13.374 lat (msec) : 100=63.24%, 250=3.34% 00:23:13.374 cpu : usr=38.35%, sys=2.38%, ctx=1309, majf=0, minf=9 00:23:13.374 IO depths : 1=0.1%, 2=1.5%, 4=5.9%, 8=77.5%, 16=15.1%, 32=0.0%, >=64=0.0% 00:23:13.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.374 complete : 0=0.0%, 4=88.5%, 8=10.2%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.374 issued rwts: total=2576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.374 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.374 filename2: (groupid=0, jobs=1): err= 0: pid=97923: Tue Nov 19 02:04:22 2024 00:23:13.374 read: IOPS=251, BW=1006KiB/s (1031kB/s)(9.87MiB/10043msec) 00:23:13.374 slat (usec): min=4, max=8028, avg=33.15, stdev=390.05 00:23:13.374 clat (msec): min=5, max=131, avg=63.32, stdev=20.04 00:23:13.374 lat (msec): min=5, max=131, avg=63.35, stdev=20.04 00:23:13.374 clat percentiles (msec): 00:23:13.374 | 1.00th=[ 15], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 48], 00:23:13.374 | 30.00th=[ 49], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 72], 00:23:13.374 | 70.00th=[ 73], 80.00th=[ 82], 90.00th=[ 85], 95.00th=[ 96], 00:23:13.374 | 99.00th=[ 108], 99.50th=[ 108], 99.90th=[ 121], 99.95th=[ 130], 00:23:13.374 | 99.99th=[ 132] 00:23:13.374 bw ( KiB/s): min= 784, max= 2007, per=4.10%, avg=1006.75, stdev=281.68, samples=20 00:23:13.374 iops : min= 196, max= 501, avg=251.65, 
stdev=70.28, samples=20 00:23:13.374 lat (msec) : 10=0.63%, 20=1.35%, 50=30.87%, 100=64.74%, 250=2.41% 00:23:13.374 cpu : usr=30.59%, sys=1.80%, ctx=843, majf=0, minf=9 00:23:13.374 IO depths : 1=0.2%, 2=1.1%, 4=4.2%, 8=78.6%, 16=15.9%, 32=0.0%, >=64=0.0% 00:23:13.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.374 complete : 0=0.0%, 4=88.6%, 8=10.5%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.374 issued rwts: total=2527,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.374 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.374 filename2: (groupid=0, jobs=1): err= 0: pid=97924: Tue Nov 19 02:04:22 2024 00:23:13.374 read: IOPS=265, BW=1062KiB/s (1088kB/s)(10.4MiB/10012msec) 00:23:13.374 slat (usec): min=3, max=4026, avg=20.02, stdev=140.09 00:23:13.374 clat (msec): min=13, max=119, avg=60.14, stdev=19.70 00:23:13.374 lat (msec): min=13, max=119, avg=60.16, stdev=19.70 00:23:13.374 clat percentiles (msec): 00:23:13.374 | 1.00th=[ 21], 5.00th=[ 29], 10.00th=[ 34], 20.00th=[ 45], 00:23:13.374 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 59], 60.00th=[ 69], 00:23:13.374 | 70.00th=[ 72], 80.00th=[ 79], 90.00th=[ 84], 95.00th=[ 90], 00:23:13.374 | 99.00th=[ 108], 99.50th=[ 111], 99.90th=[ 121], 99.95th=[ 121], 00:23:13.374 | 99.99th=[ 121] 00:23:13.374 bw ( KiB/s): min= 816, max= 1821, per=4.30%, avg=1055.58, stdev=265.40, samples=19 00:23:13.374 iops : min= 204, max= 455, avg=263.84, stdev=66.20, samples=19 00:23:13.374 lat (msec) : 20=0.90%, 50=35.16%, 100=61.90%, 250=2.03% 00:23:13.374 cpu : usr=39.25%, sys=2.13%, ctx=1299, majf=0, minf=9 00:23:13.374 IO depths : 1=0.1%, 2=0.4%, 4=1.3%, 8=82.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:23:13.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.374 complete : 0=0.0%, 4=87.2%, 8=12.5%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.374 issued rwts: total=2659,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.374 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.374 filename2: (groupid=0, jobs=1): err= 0: pid=97925: Tue Nov 19 02:04:22 2024 00:23:13.374 read: IOPS=269, BW=1079KiB/s (1105kB/s)(10.6MiB/10029msec) 00:23:13.374 slat (usec): min=3, max=4025, avg=17.18, stdev=77.27 00:23:13.374 clat (msec): min=15, max=121, avg=59.21, stdev=20.82 00:23:13.374 lat (msec): min=15, max=121, avg=59.23, stdev=20.82 00:23:13.374 clat percentiles (msec): 00:23:13.374 | 1.00th=[ 19], 5.00th=[ 23], 10.00th=[ 31], 20.00th=[ 41], 00:23:13.374 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 61], 60.00th=[ 70], 00:23:13.374 | 70.00th=[ 72], 80.00th=[ 78], 90.00th=[ 85], 95.00th=[ 91], 00:23:13.374 | 99.00th=[ 108], 99.50th=[ 109], 99.90th=[ 123], 99.95th=[ 123], 00:23:13.374 | 99.99th=[ 123] 00:23:13.374 bw ( KiB/s): min= 840, max= 2132, per=4.39%, avg=1075.65, stdev=351.76, samples=20 00:23:13.374 iops : min= 210, max= 533, avg=268.90, stdev=87.94, samples=20 00:23:13.374 lat (msec) : 20=1.66%, 50=36.08%, 100=60.96%, 250=1.29% 00:23:13.374 cpu : usr=41.07%, sys=2.50%, ctx=1101, majf=0, minf=9 00:23:13.374 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=83.0%, 16=15.9%, 32=0.0%, >=64=0.0% 00:23:13.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.374 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.374 issued rwts: total=2705,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.374 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.374 filename2: (groupid=0, jobs=1): err= 0: pid=97926: Tue Nov 19 02:04:22 2024 
00:23:13.374 read: IOPS=248, BW=993KiB/s (1017kB/s)(9932KiB/10001msec) 00:23:13.374 slat (nsec): min=3770, max=46222, avg=14321.65, stdev=4725.71 00:23:13.374 clat (usec): min=1095, max=121061, avg=64368.02, stdev=22635.75 00:23:13.374 lat (usec): min=1103, max=121076, avg=64382.34, stdev=22635.59 00:23:13.374 clat percentiles (msec): 00:23:13.374 | 1.00th=[ 3], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 48], 00:23:13.374 | 30.00th=[ 50], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 72], 00:23:13.374 | 70.00th=[ 77], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 96], 00:23:13.374 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 122], 00:23:13.374 | 99.99th=[ 122] 00:23:13.374 bw ( KiB/s): min= 656, max= 1667, per=3.90%, avg=957.11, stdev=249.24, samples=19 00:23:13.374 iops : min= 164, max= 416, avg=239.21, stdev=62.13, samples=19 00:23:13.374 lat (msec) : 2=0.64%, 4=0.60%, 10=1.05%, 20=0.72%, 50=29.72% 00:23:13.374 lat (msec) : 100=63.87%, 250=3.38% 00:23:13.374 cpu : usr=30.83%, sys=1.67%, ctx=844, majf=0, minf=9 00:23:13.374 IO depths : 1=0.2%, 2=2.5%, 4=9.4%, 8=73.5%, 16=14.5%, 32=0.0%, >=64=0.0% 00:23:13.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.374 complete : 0=0.0%, 4=89.6%, 8=8.3%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.374 issued rwts: total=2483,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.374 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.375 filename2: (groupid=0, jobs=1): err= 0: pid=97927: Tue Nov 19 02:04:22 2024 00:23:13.375 read: IOPS=257, BW=1029KiB/s (1054kB/s)(10.1MiB/10021msec) 00:23:13.375 slat (usec): min=4, max=12026, avg=28.89, stdev=361.22 00:23:13.375 clat (msec): min=19, max=119, avg=61.98, stdev=18.75 00:23:13.375 lat (msec): min=19, max=119, avg=62.01, stdev=18.75 00:23:13.375 clat percentiles (msec): 00:23:13.375 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 48], 00:23:13.375 | 30.00th=[ 48], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 72], 00:23:13.375 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 85], 95.00th=[ 94], 00:23:13.375 | 99.00th=[ 107], 99.50th=[ 108], 99.90th=[ 121], 99.95th=[ 121], 00:23:13.375 | 99.99th=[ 121] 00:23:13.375 bw ( KiB/s): min= 896, max= 1664, per=4.19%, avg=1027.05, stdev=200.84, samples=20 00:23:13.375 iops : min= 224, max= 416, avg=256.75, stdev=50.21, samples=20 00:23:13.375 lat (msec) : 20=0.08%, 50=36.99%, 100=61.23%, 250=1.71% 00:23:13.375 cpu : usr=30.59%, sys=1.71%, ctx=836, majf=0, minf=9 00:23:13.375 IO depths : 1=0.1%, 2=1.0%, 4=4.0%, 8=79.5%, 16=15.4%, 32=0.0%, >=64=0.0% 00:23:13.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.375 complete : 0=0.0%, 4=88.0%, 8=11.1%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.375 issued rwts: total=2579,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.375 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.375 filename2: (groupid=0, jobs=1): err= 0: pid=97928: Tue Nov 19 02:04:22 2024 00:23:13.375 read: IOPS=256, BW=1024KiB/s (1049kB/s)(10.0MiB/10024msec) 00:23:13.375 slat (usec): min=4, max=1034, avg=15.08, stdev=20.70 00:23:13.375 clat (msec): min=27, max=119, avg=62.35, stdev=19.83 00:23:13.375 lat (msec): min=27, max=119, avg=62.36, stdev=19.83 00:23:13.375 clat percentiles (msec): 00:23:13.375 | 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 47], 00:23:13.375 | 30.00th=[ 49], 40.00th=[ 54], 50.00th=[ 63], 60.00th=[ 72], 00:23:13.375 | 70.00th=[ 74], 80.00th=[ 80], 90.00th=[ 85], 95.00th=[ 96], 00:23:13.375 | 99.00th=[ 112], 99.50th=[ 121], 99.90th=[ 121], 
99.95th=[ 121], 00:23:13.375 | 99.99th=[ 121] 00:23:13.375 bw ( KiB/s): min= 766, max= 1792, per=4.17%, avg=1023.10, stdev=253.09, samples=20 00:23:13.375 iops : min= 191, max= 448, avg=255.75, stdev=63.30, samples=20 00:23:13.375 lat (msec) : 50=34.63%, 100=61.94%, 250=3.43% 00:23:13.375 cpu : usr=37.11%, sys=2.17%, ctx=1204, majf=0, minf=9 00:23:13.375 IO depths : 1=0.2%, 2=1.6%, 4=5.9%, 8=77.4%, 16=15.0%, 32=0.0%, >=64=0.0% 00:23:13.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.375 complete : 0=0.0%, 4=88.5%, 8=10.2%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.375 issued rwts: total=2567,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.375 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.375 filename2: (groupid=0, jobs=1): err= 0: pid=97929: Tue Nov 19 02:04:22 2024 00:23:13.375 read: IOPS=268, BW=1074KiB/s (1099kB/s)(10.5MiB/10029msec) 00:23:13.375 slat (usec): min=3, max=4025, avg=16.54, stdev=77.44 00:23:13.375 clat (msec): min=14, max=117, avg=59.49, stdev=21.34 00:23:13.375 lat (msec): min=14, max=117, avg=59.51, stdev=21.34 00:23:13.375 clat percentiles (msec): 00:23:13.375 | 1.00th=[ 18], 5.00th=[ 22], 10.00th=[ 31], 20.00th=[ 41], 00:23:13.375 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 62], 60.00th=[ 71], 00:23:13.375 | 70.00th=[ 73], 80.00th=[ 79], 90.00th=[ 84], 95.00th=[ 93], 00:23:13.375 | 99.00th=[ 106], 99.50th=[ 109], 99.90th=[ 118], 99.95th=[ 118], 00:23:13.375 | 99.99th=[ 118] 00:23:13.375 bw ( KiB/s): min= 864, max= 2240, per=4.37%, avg=1070.30, stdev=371.54, samples=20 00:23:13.375 iops : min= 216, max= 560, avg=267.55, stdev=92.89, samples=20 00:23:13.375 lat (msec) : 20=2.53%, 50=35.48%, 100=59.77%, 250=2.23% 00:23:13.375 cpu : usr=43.56%, sys=2.60%, ctx=1225, majf=0, minf=9 00:23:13.375 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.2%, 16=16.0%, 32=0.0%, >=64=0.0% 00:23:13.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.375 complete : 0=0.0%, 4=87.1%, 8=12.8%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.375 issued rwts: total=2692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.375 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.375 filename2: (groupid=0, jobs=1): err= 0: pid=97930: Tue Nov 19 02:04:22 2024 00:23:13.375 read: IOPS=253, BW=1015KiB/s (1039kB/s)(9.95MiB/10045msec) 00:23:13.375 slat (usec): min=4, max=8023, avg=17.43, stdev=171.22 00:23:13.375 clat (msec): min=8, max=143, avg=62.91, stdev=20.28 00:23:13.375 lat (msec): min=8, max=143, avg=62.92, stdev=20.28 00:23:13.375 clat percentiles (msec): 00:23:13.375 | 1.00th=[ 16], 5.00th=[ 31], 10.00th=[ 34], 20.00th=[ 45], 00:23:13.375 | 30.00th=[ 51], 40.00th=[ 58], 50.00th=[ 68], 60.00th=[ 72], 00:23:13.375 | 70.00th=[ 75], 80.00th=[ 81], 90.00th=[ 85], 95.00th=[ 95], 00:23:13.375 | 99.00th=[ 108], 99.50th=[ 114], 99.90th=[ 121], 99.95th=[ 126], 00:23:13.375 | 99.99th=[ 144] 00:23:13.375 bw ( KiB/s): min= 760, max= 2015, per=4.14%, avg=1014.75, stdev=304.28, samples=20 00:23:13.375 iops : min= 190, max= 503, avg=253.65, stdev=75.94, samples=20 00:23:13.375 lat (msec) : 10=0.63%, 20=0.71%, 50=28.18%, 100=68.25%, 250=2.24% 00:23:13.375 cpu : usr=37.23%, sys=2.05%, ctx=1233, majf=0, minf=9 00:23:13.375 IO depths : 1=0.1%, 2=0.9%, 4=3.8%, 8=79.0%, 16=16.1%, 32=0.0%, >=64=0.0% 00:23:13.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.375 complete : 0=0.0%, 4=88.5%, 8=10.6%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.375 issued rwts: total=2548,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:23:13.375 latency : target=0, window=0, percentile=100.00%, depth=16 00:23:13.375 00:23:13.375 Run status group 0 (all jobs): 00:23:13.375 READ: bw=23.9MiB/s (25.1MB/s), 941KiB/s-1079KiB/s (964kB/s-1105kB/s), io=240MiB (252MB), run=10001-10047msec 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:23:13.375 02:04:22 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:23:13.375 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:13.376 bdev_null0 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:13.376 [2024-11-19 02:04:22.623932] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:13.376 
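[Editor's note] The create_subsystem helper traced above is just four RPCs against the running nvmf_tgt. Reproduced by hand for subsystem 0 (a minimal sketch; it assumes the target is already up with a TCP transport created, which happened earlier in the log, outside this excerpt):

# Recreate the DIF-capable null bdev and export it over NVMe/TCP,
# mirroring the rpc_cmd calls traced above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# 64 MiB null bdev, 512-byte blocks plus 16 bytes of metadata, DIF type 1
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1

# Subsystem, namespace, and TCP listener on 10.0.0.3:4420
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.3 -s 4420

Subsystem 1 is created identically just below, with serial 53313233-1 and bdev_null1.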
02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:13.376 bdev_null1 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:13.376 { 00:23:13.376 "params": { 00:23:13.376 "name": "Nvme$subsystem", 00:23:13.376 "trtype": "$TEST_TRANSPORT", 00:23:13.376 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.376 "adrfam": "ipv4", 00:23:13.376 "trsvcid": "$NVMF_PORT", 00:23:13.376 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.376 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.376 "hdgst": ${hdgst:-false}, 00:23:13.376 "ddgst": ${ddgst:-false} 00:23:13.376 }, 00:23:13.376 "method": "bdev_nvme_attach_controller" 00:23:13.376 } 00:23:13.376 EOF 00:23:13.376 )") 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:13.376 { 00:23:13.376 "params": { 00:23:13.376 "name": "Nvme$subsystem", 00:23:13.376 "trtype": "$TEST_TRANSPORT", 00:23:13.376 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.376 "adrfam": "ipv4", 00:23:13.376 "trsvcid": "$NVMF_PORT", 00:23:13.376 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.376 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.376 "hdgst": ${hdgst:-false}, 00:23:13.376 "ddgst": ${ddgst:-false} 00:23:13.376 }, 00:23:13.376 "method": "bdev_nvme_attach_controller" 00:23:13.376 } 00:23:13.376 EOF 00:23:13.376 )") 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:23:13.376 02:04:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:13.376 "params": { 00:23:13.376 "name": "Nvme0", 00:23:13.376 "trtype": "tcp", 00:23:13.376 "traddr": "10.0.0.3", 00:23:13.376 "adrfam": "ipv4", 00:23:13.376 "trsvcid": "4420", 00:23:13.376 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:13.376 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:13.376 "hdgst": false, 00:23:13.376 "ddgst": false 00:23:13.376 }, 00:23:13.376 "method": "bdev_nvme_attach_controller" 00:23:13.376 },{ 00:23:13.376 "params": { 00:23:13.376 "name": "Nvme1", 00:23:13.376 "trtype": "tcp", 00:23:13.376 "traddr": "10.0.0.3", 00:23:13.376 "adrfam": "ipv4", 00:23:13.376 "trsvcid": "4420", 00:23:13.376 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.376 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:13.376 "hdgst": false, 00:23:13.376 "ddgst": false 00:23:13.376 }, 00:23:13.376 "method": "bdev_nvme_attach_controller" 00:23:13.377 }' 00:23:13.377 02:04:22 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:13.377 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:23:13.377 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:13.377 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:13.377 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:13.377 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:13.377 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:13.377 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:13.377 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:13.377 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:13.377 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:13.377 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:13.377 02:04:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:13.377 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:13.377 ... 00:23:13.377 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:23:13.377 ... 
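[Editor's note] Everything fio needs here arrives on two file descriptors: /dev/fd/62 carries the JSON printed above (it makes the SPDK fio plugin attach Nvme0 and Nvme1 as bdevs), and /dev/fd/61 carries the job file. A standalone equivalent, with the job file reconstructed from the filename0/filename1 description lines above (a sketch, not the harness's exact config; bdev.json stands in for the generated JSON, and the bdev names assume SPDK's usual Nvme0 -> Nvme0n1 namespace naming):

# Re-run the same randread workload outside the harness.
cat > dif.fio <<'EOF'
[global]
# SPDK's fio plugin requires thread mode
ioengine=spdk_bdev
thread=1
rw=randread
# read/write/trim block sizes, matching "(R) 8192B ... (W) 16.0KiB ... (T) 128KiB"
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
# assumption: a fixed-duration run, as the 5s runtime set above suggests
time_based=1

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --spdk_json_conf=./bdev.json dif.fio

With numjobs=2 over two files, this is the "Starting 4 threads" reported below.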
00:23:13.377 fio-3.35 00:23:13.377 Starting 4 threads 00:23:18.651 00:23:18.651 filename0: (groupid=0, jobs=1): err= 0: pid=98068: Tue Nov 19 02:04:28 2024 00:23:18.651 read: IOPS=2235, BW=17.5MiB/s (18.3MB/s)(87.3MiB/5002msec) 00:23:18.651 slat (nsec): min=6831, max=65935, avg=15251.34, stdev=4674.84 00:23:18.651 clat (usec): min=1127, max=6284, avg=3538.34, stdev=812.04 00:23:18.651 lat (usec): min=1139, max=6297, avg=3553.59, stdev=812.07 00:23:18.651 clat percentiles (usec): 00:23:18.651 | 1.00th=[ 1778], 5.00th=[ 1975], 10.00th=[ 2704], 20.00th=[ 2999], 00:23:18.651 | 30.00th=[ 3097], 40.00th=[ 3163], 50.00th=[ 3458], 60.00th=[ 3621], 00:23:18.651 | 70.00th=[ 3982], 80.00th=[ 4424], 90.00th=[ 4686], 95.00th=[ 4817], 00:23:18.651 | 99.00th=[ 5211], 99.50th=[ 5342], 99.90th=[ 5538], 99.95th=[ 5604], 00:23:18.651 | 99.99th=[ 6194] 00:23:18.651 bw ( KiB/s): min=16673, max=18800, per=25.54%, avg=17991.22, stdev=663.52, samples=9 00:23:18.651 iops : min= 2084, max= 2350, avg=2248.89, stdev=82.97, samples=9 00:23:18.651 lat (msec) : 2=5.38%, 4=65.10%, 10=29.53% 00:23:18.651 cpu : usr=91.78%, sys=7.22%, ctx=12, majf=0, minf=0 00:23:18.651 IO depths : 1=0.1%, 2=4.2%, 4=65.3%, 8=30.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:18.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:18.651 complete : 0=0.0%, 4=98.4%, 8=1.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:18.651 issued rwts: total=11180,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:18.651 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:18.651 filename0: (groupid=0, jobs=1): err= 0: pid=98069: Tue Nov 19 02:04:28 2024 00:23:18.651 read: IOPS=2086, BW=16.3MiB/s (17.1MB/s)(81.5MiB/5002msec) 00:23:18.651 slat (nsec): min=6683, max=46953, avg=10473.11, stdev=4475.79 00:23:18.651 clat (usec): min=648, max=6880, avg=3800.42, stdev=827.99 00:23:18.651 lat (usec): min=656, max=6895, avg=3810.89, stdev=828.33 00:23:18.651 clat percentiles (usec): 00:23:18.651 | 1.00th=[ 1450], 5.00th=[ 2933], 10.00th=[ 3032], 20.00th=[ 3097], 00:23:18.651 | 30.00th=[ 3195], 40.00th=[ 3458], 50.00th=[ 3621], 60.00th=[ 3982], 00:23:18.651 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 4817], 95.00th=[ 4948], 00:23:18.651 | 99.00th=[ 5538], 99.50th=[ 5669], 99.90th=[ 6325], 99.95th=[ 6325], 00:23:18.651 | 99.99th=[ 6849] 00:23:18.651 bw ( KiB/s): min=13680, max=18096, per=23.36%, avg=16455.11, stdev=1688.99, samples=9 00:23:18.651 iops : min= 1710, max= 2262, avg=2056.89, stdev=211.12, samples=9 00:23:18.651 lat (usec) : 750=0.03%, 1000=0.05% 00:23:18.651 lat (msec) : 2=2.95%, 4=57.21%, 10=39.76% 00:23:18.651 cpu : usr=91.00%, sys=8.14%, ctx=23, majf=0, minf=0 00:23:18.651 IO depths : 1=0.1%, 2=9.1%, 4=62.9%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:18.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:18.651 complete : 0=0.0%, 4=96.4%, 8=3.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:18.651 issued rwts: total=10437,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:18.651 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:18.651 filename1: (groupid=0, jobs=1): err= 0: pid=98070: Tue Nov 19 02:04:28 2024 00:23:18.651 read: IOPS=2249, BW=17.6MiB/s (18.4MB/s)(87.9MiB/5002msec) 00:23:18.651 slat (usec): min=3, max=4040, avg=15.53, stdev=38.32 00:23:18.651 clat (usec): min=682, max=9617, avg=3514.62, stdev=850.13 00:23:18.651 lat (usec): min=690, max=9629, avg=3530.15, stdev=849.99 00:23:18.651 clat percentiles (usec): 00:23:18.651 | 1.00th=[ 1500], 5.00th=[ 1958], 10.00th=[ 2606], 20.00th=[ 
2999], 00:23:18.651 | 30.00th=[ 3064], 40.00th=[ 3163], 50.00th=[ 3425], 60.00th=[ 3589], 00:23:18.651 | 70.00th=[ 3949], 80.00th=[ 4424], 90.00th=[ 4686], 95.00th=[ 4817], 00:23:18.651 | 99.00th=[ 5211], 99.50th=[ 5342], 99.90th=[ 6456], 99.95th=[ 9372], 00:23:18.651 | 99.99th=[ 9503] 00:23:18.651 bw ( KiB/s): min=17472, max=18800, per=25.73%, avg=18121.22, stdev=454.72, samples=9 00:23:18.651 iops : min= 2184, max= 2350, avg=2265.11, stdev=56.87, samples=9 00:23:18.651 lat (usec) : 750=0.15%, 1000=0.29% 00:23:18.651 lat (msec) : 2=5.62%, 4=65.48%, 10=28.45% 00:23:18.651 cpu : usr=91.52%, sys=7.26%, ctx=118, majf=0, minf=10 00:23:18.651 IO depths : 1=0.1%, 2=3.6%, 4=65.7%, 8=30.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:18.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:18.651 complete : 0=0.0%, 4=98.6%, 8=1.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:18.651 issued rwts: total=11253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:18.651 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:18.651 filename1: (groupid=0, jobs=1): err= 0: pid=98071: Tue Nov 19 02:04:28 2024 00:23:18.651 read: IOPS=2233, BW=17.4MiB/s (18.3MB/s)(87.3MiB/5002msec) 00:23:18.651 slat (nsec): min=6770, max=63182, avg=14360.87, stdev=5149.51 00:23:18.651 clat (usec): min=1155, max=6302, avg=3542.99, stdev=810.91 00:23:18.651 lat (usec): min=1164, max=6315, avg=3557.35, stdev=811.51 00:23:18.651 clat percentiles (usec): 00:23:18.651 | 1.00th=[ 1778], 5.00th=[ 1991], 10.00th=[ 2704], 20.00th=[ 2999], 00:23:18.651 | 30.00th=[ 3097], 40.00th=[ 3163], 50.00th=[ 3458], 60.00th=[ 3621], 00:23:18.651 | 70.00th=[ 3982], 80.00th=[ 4424], 90.00th=[ 4686], 95.00th=[ 4817], 00:23:18.651 | 99.00th=[ 5211], 99.50th=[ 5342], 99.90th=[ 5538], 99.95th=[ 5604], 00:23:18.651 | 99.99th=[ 6194] 00:23:18.651 bw ( KiB/s): min=16640, max=18800, per=25.54%, avg=17987.56, stdev=671.75, samples=9 00:23:18.651 iops : min= 2080, max= 2350, avg=2248.44, stdev=83.97, samples=9 00:23:18.651 lat (msec) : 2=5.11%, 4=64.98%, 10=29.91% 00:23:18.651 cpu : usr=91.66%, sys=7.40%, ctx=9, majf=0, minf=0 00:23:18.651 IO depths : 1=0.1%, 2=4.2%, 4=65.3%, 8=30.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:18.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:18.651 complete : 0=0.0%, 4=98.4%, 8=1.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:18.651 issued rwts: total=11172,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:18.651 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:18.651 00:23:18.651 Run status group 0 (all jobs): 00:23:18.651 READ: bw=68.8MiB/s (72.1MB/s), 16.3MiB/s-17.6MiB/s (17.1MB/s-18.4MB/s), io=344MiB (361MB), run=5002-5002msec 00:23:18.651 02:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:23:18.651 02:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:18.651 02:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:18.651 02:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:18.651 02:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:18.651 02:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:18.651 02:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.651 02:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:18.651 02:04:28 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.651 02:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:18.651 02:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.651 02:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:18.651 02:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.651 02:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:18.651 02:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:18.651 02:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:23:18.651 02:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:18.651 02:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.651 02:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:18.651 02:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.651 02:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:18.651 02:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.651 02:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:18.651 ************************************ 00:23:18.651 END TEST fio_dif_rand_params 00:23:18.651 ************************************ 00:23:18.651 02:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.651 00:23:18.651 real 0m23.108s 00:23:18.651 user 2m2.169s 00:23:18.651 sys 0m8.503s 00:23:18.651 02:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:18.651 02:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:18.651 02:04:28 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:23:18.651 02:04:28 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:18.651 02:04:28 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:18.651 02:04:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:18.651 ************************************ 00:23:18.651 START TEST fio_dif_digest 00:23:18.652 ************************************ 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:18.652 bdev_null0 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:18.652 [2024-11-19 02:04:28.683530] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:18.652 { 00:23:18.652 "params": { 00:23:18.652 "name": "Nvme$subsystem", 00:23:18.652 "trtype": "$TEST_TRANSPORT", 00:23:18.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:18.652 "adrfam": "ipv4", 00:23:18.652 "trsvcid": "$NVMF_PORT", 00:23:18.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:18.652 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:18.652 "hdgst": ${hdgst:-false}, 00:23:18.652 "ddgst": ${ddgst:-false} 00:23:18.652 }, 00:23:18.652 "method": 
"bdev_nvme_attach_controller" 00:23:18.652 } 00:23:18.652 EOF 00:23:18.652 )") 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:18.652 "params": { 00:23:18.652 "name": "Nvme0", 00:23:18.652 "trtype": "tcp", 00:23:18.652 "traddr": "10.0.0.3", 00:23:18.652 "adrfam": "ipv4", 00:23:18.652 "trsvcid": "4420", 00:23:18.652 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:18.652 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:18.652 "hdgst": true, 00:23:18.652 "ddgst": true 00:23:18.652 }, 00:23:18.652 "method": "bdev_nvme_attach_controller" 00:23:18.652 }' 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:18.652 02:04:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:18.652 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:18.652 ... 
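[Editor's note] The percentile tables that follow are fio's human-readable report; when post-processing runs like this one, it is usually easier to ask fio for JSON than to scrape the log. A sketch using standard fio options (dif_digest.fio stands in for the job file passed on /dev/fd/61):

# Same run, machine-readable: JSON output plus a jq pull of the
# per-job read IOPS and bandwidth (KiB/s).
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --spdk_json_conf=./bdev.json \
    --output-format=json --output=digest.json dif_digest.fio
jq '.jobs[] | {job: .jobname, iops: .read.iops, bw_kib: .read.bw}' digest.json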
00:23:18.652 fio-3.35 00:23:18.652 Starting 3 threads 00:23:30.860 00:23:30.860 filename0: (groupid=0, jobs=1): err= 0: pid=98177: Tue Nov 19 02:04:39 2024 00:23:30.860 read: IOPS=248, BW=31.0MiB/s (32.5MB/s)(311MiB/10013msec) 00:23:30.860 slat (nsec): min=6288, max=57694, avg=9210.91, stdev=3431.24 00:23:30.860 clat (usec): min=11567, max=14470, avg=12069.81, stdev=468.04 00:23:30.860 lat (usec): min=11575, max=14481, avg=12079.02, stdev=468.42 00:23:30.860 clat percentiles (usec): 00:23:30.860 | 1.00th=[11600], 5.00th=[11731], 10.00th=[11731], 20.00th=[11731], 00:23:30.860 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11863], 60.00th=[11994], 00:23:30.860 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12649], 95.00th=[13173], 00:23:30.860 | 99.00th=[13829], 99.50th=[13960], 99.90th=[14484], 99.95th=[14484], 00:23:30.860 | 99.99th=[14484] 00:23:30.860 bw ( KiB/s): min=30720, max=33024, per=33.34%, avg=31756.80, stdev=672.07, samples=20 00:23:30.860 iops : min= 240, max= 258, avg=248.10, stdev= 5.25, samples=20 00:23:30.860 lat (msec) : 20=100.00% 00:23:30.860 cpu : usr=90.04%, sys=9.46%, ctx=15, majf=0, minf=9 00:23:30.860 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:30.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.860 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.860 issued rwts: total=2484,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:30.860 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:30.860 filename0: (groupid=0, jobs=1): err= 0: pid=98178: Tue Nov 19 02:04:39 2024 00:23:30.860 read: IOPS=248, BW=31.0MiB/s (32.5MB/s)(311MiB/10010msec) 00:23:30.860 slat (nsec): min=6973, max=52143, avg=13969.95, stdev=4095.56 00:23:30.860 clat (usec): min=10181, max=15465, avg=12058.32, stdev=475.80 00:23:30.860 lat (usec): min=10194, max=15479, avg=12072.29, stdev=476.19 00:23:30.860 clat percentiles (usec): 00:23:30.860 | 1.00th=[11600], 5.00th=[11731], 10.00th=[11731], 20.00th=[11731], 00:23:30.860 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11863], 60.00th=[11994], 00:23:30.860 | 70.00th=[11994], 80.00th=[12387], 90.00th=[12649], 95.00th=[13173], 00:23:30.860 | 99.00th=[13829], 99.50th=[13960], 99.90th=[15401], 99.95th=[15401], 00:23:30.860 | 99.99th=[15401] 00:23:30.860 bw ( KiB/s): min=30720, max=33024, per=33.34%, avg=31756.80, stdev=572.28, samples=20 00:23:30.860 iops : min= 240, max= 258, avg=248.10, stdev= 4.47, samples=20 00:23:30.860 lat (msec) : 20=100.00% 00:23:30.860 cpu : usr=90.91%, sys=8.56%, ctx=11, majf=0, minf=0 00:23:30.860 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:30.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.860 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.860 issued rwts: total=2484,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:30.860 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:30.860 filename0: (groupid=0, jobs=1): err= 0: pid=98179: Tue Nov 19 02:04:39 2024 00:23:30.860 read: IOPS=248, BW=31.0MiB/s (32.5MB/s)(311MiB/10010msec) 00:23:30.860 slat (nsec): min=7000, max=46109, avg=13542.11, stdev=4017.75 00:23:30.860 clat (usec): min=10187, max=15459, avg=12059.89, stdev=476.23 00:23:30.860 lat (usec): min=10200, max=15471, avg=12073.43, stdev=476.47 00:23:30.860 clat percentiles (usec): 00:23:30.860 | 1.00th=[11600], 5.00th=[11731], 10.00th=[11731], 20.00th=[11731], 00:23:30.860 | 30.00th=[11731], 40.00th=[11863], 
50.00th=[11863], 60.00th=[11994], 00:23:30.860 | 70.00th=[11994], 80.00th=[12387], 90.00th=[12649], 95.00th=[13173], 00:23:30.860 | 99.00th=[13829], 99.50th=[13960], 99.90th=[15401], 99.95th=[15401], 00:23:30.860 | 99.99th=[15401] 00:23:30.860 bw ( KiB/s): min=30720, max=33024, per=33.34%, avg=31756.80, stdev=572.28, samples=20 00:23:30.860 iops : min= 240, max= 258, avg=248.10, stdev= 4.47, samples=20 00:23:30.860 lat (msec) : 20=100.00% 00:23:30.860 cpu : usr=90.81%, sys=8.69%, ctx=5, majf=0, minf=0 00:23:30.860 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:30.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.860 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.860 issued rwts: total=2484,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:30.860 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:30.860 00:23:30.860 Run status group 0 (all jobs): 00:23:30.860 READ: bw=93.0MiB/s (97.5MB/s), 31.0MiB/s-31.0MiB/s (32.5MB/s-32.5MB/s), io=932MiB (977MB), run=10010-10013msec 00:23:30.860 02:04:39 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:23:30.860 02:04:39 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:23:30.860 02:04:39 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:23:30.860 02:04:39 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:30.860 02:04:39 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:23:30.860 02:04:39 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:30.860 02:04:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.860 02:04:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:30.860 02:04:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.860 02:04:39 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:30.860 02:04:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.860 02:04:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:30.860 ************************************ 00:23:30.860 END TEST fio_dif_digest 00:23:30.860 ************************************ 00:23:30.860 02:04:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.860 00:23:30.860 real 0m10.878s 00:23:30.860 user 0m27.763s 00:23:30.860 sys 0m2.900s 00:23:30.860 02:04:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:30.860 02:04:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:30.860 02:04:39 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:23:30.860 02:04:39 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:23:30.860 02:04:39 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:30.860 02:04:39 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:23:30.860 02:04:39 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:30.860 02:04:39 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:23:30.860 02:04:39 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:30.860 02:04:39 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:30.860 rmmod nvme_tcp 00:23:30.860 rmmod nvme_fabrics 00:23:30.860 rmmod nvme_keyring 00:23:30.860 02:04:39 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:30.860 02:04:39 nvmf_dif 
-- nvmf/common.sh@128 -- # set -e 00:23:30.860 02:04:39 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:23:30.860 02:04:39 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 97443 ']' 00:23:30.860 02:04:39 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 97443 00:23:30.860 02:04:39 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 97443 ']' 00:23:30.860 02:04:39 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 97443 00:23:30.860 02:04:39 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:23:30.860 02:04:39 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:30.860 02:04:39 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97443 00:23:30.860 killing process with pid 97443 00:23:30.860 02:04:39 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:30.860 02:04:39 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:30.860 02:04:39 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97443' 00:23:30.860 02:04:39 nvmf_dif -- common/autotest_common.sh@973 -- # kill 97443 00:23:30.860 02:04:39 nvmf_dif -- common/autotest_common.sh@978 -- # wait 97443 00:23:30.860 02:04:39 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:23:30.860 02:04:39 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:30.860 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:30.860 Waiting for block devices as requested 00:23:30.860 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:30.860 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:30.860 02:04:40 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:30.860 02:04:40 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:30.860 02:04:40 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:23:30.860 02:04:40 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:23:30.860 02:04:40 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:30.860 02:04:40 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:23:30.860 02:04:40 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:30.860 02:04:40 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:30.860 02:04:40 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:30.860 02:04:40 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:30.860 02:04:40 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:30.860 02:04:40 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:30.860 02:04:40 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:30.860 02:04:40 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:30.860 02:04:40 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:30.860 02:04:40 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:30.860 02:04:40 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:30.860 02:04:40 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:30.860 02:04:40 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:30.860 02:04:40 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:30.860 02:04:40 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:30.860 02:04:40 nvmf_dif -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:23:30.860 02:04:40 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.860 02:04:40 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:30.860 02:04:40 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.860 02:04:40 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:23:30.860 00:23:30.860 real 0m58.505s 00:23:30.860 user 3m45.249s 00:23:30.860 sys 0m19.840s 00:23:30.861 ************************************ 00:23:30.861 END TEST nvmf_dif 00:23:30.861 ************************************ 00:23:30.861 02:04:40 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:30.861 02:04:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:30.861 02:04:40 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:30.861 02:04:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:30.861 02:04:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:30.861 02:04:40 -- common/autotest_common.sh@10 -- # set +x 00:23:30.861 ************************************ 00:23:30.861 START TEST nvmf_abort_qd_sizes 00:23:30.861 ************************************ 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:30.861 * Looking for test storage... 00:23:30.861 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:30.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.861 --rc genhtml_branch_coverage=1 00:23:30.861 --rc genhtml_function_coverage=1 00:23:30.861 --rc genhtml_legend=1 00:23:30.861 --rc geninfo_all_blocks=1 00:23:30.861 --rc geninfo_unexecuted_blocks=1 00:23:30.861 00:23:30.861 ' 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:30.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.861 --rc genhtml_branch_coverage=1 00:23:30.861 --rc genhtml_function_coverage=1 00:23:30.861 --rc genhtml_legend=1 00:23:30.861 --rc geninfo_all_blocks=1 00:23:30.861 --rc geninfo_unexecuted_blocks=1 00:23:30.861 00:23:30.861 ' 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:30.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.861 --rc genhtml_branch_coverage=1 00:23:30.861 --rc genhtml_function_coverage=1 00:23:30.861 --rc genhtml_legend=1 00:23:30.861 --rc geninfo_all_blocks=1 00:23:30.861 --rc geninfo_unexecuted_blocks=1 00:23:30.861 00:23:30.861 ' 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:30.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.861 --rc genhtml_branch_coverage=1 00:23:30.861 --rc genhtml_function_coverage=1 00:23:30.861 --rc genhtml_legend=1 00:23:30.861 --rc geninfo_all_blocks=1 00:23:30.861 --rc geninfo_unexecuted_blocks=1 00:23:30.861 00:23:30.861 ' 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:30.861 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.861 02:04:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:30.862 Cannot find device "nvmf_init_br" 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:30.862 Cannot find device "nvmf_init_br2" 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:30.862 Cannot find device "nvmf_tgt_br" 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:30.862 Cannot find device "nvmf_tgt_br2" 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:30.862 Cannot find device "nvmf_init_br" 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:30.862 Cannot find device "nvmf_init_br2" 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:30.862 Cannot find device "nvmf_tgt_br" 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:30.862 Cannot find device "nvmf_tgt_br2" 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:30.862 Cannot find device "nvmf_br" 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:23:30.862 02:04:40 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:30.862 Cannot find device "nvmf_init_if" 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:30.862 Cannot find device "nvmf_init_if2" 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:30.862 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:30.862 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:30.862 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:30.862 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:23:30.862 00:23:30.862 --- 10.0.0.3 ping statistics --- 00:23:30.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.862 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:30.862 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:30.862 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:23:30.862 00:23:30.862 --- 10.0.0.4 ping statistics --- 00:23:30.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.862 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:30.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:30.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:23:30.862 00:23:30.862 --- 10.0.0.1 ping statistics --- 00:23:30.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.862 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:30.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:30.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:23:30.862 00:23:30.862 --- 10.0.0.2 ping statistics --- 00:23:30.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.862 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:23:30.862 02:04:41 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:31.429 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:31.688 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:31.688 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:31.688 02:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:31.688 02:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:31.688 02:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:31.688 02:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:31.688 02:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:31.688 02:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:31.688 02:04:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:23:31.688 02:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:31.689 02:04:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:31.689 02:04:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:31.689 02:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=98824 00:23:31.689 02:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 98824 00:23:31.689 02:04:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 98824 ']' 00:23:31.689 02:04:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.689 02:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:23:31.689 02:04:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:31.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.689 02:04:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:31.689 02:04:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:31.689 02:04:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:31.689 [2024-11-19 02:04:42.287707] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
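[Annotation] The nvmf_veth_init sequence traced above boils down to the following standalone sketch. Interface names, addresses, and the port-4420 iptables rule are taken directly from this log; the real helper also wires a second initiator/target pair (nvmf_init_if2 at 10.0.0.2, nvmf_tgt_if2 at 10.0.0.4) the same way, which is omitted here for brevity.

    # Minimal reconstruction of the test network built above:
    # host-side initiator (10.0.0.1) bridged to a target veth inside a netns (10.0.0.3).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3   # initiator -> namespaced target, as verified above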
00:23:31.689 [2024-11-19 02:04:42.287794] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:31.948 [2024-11-19 02:04:42.437575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:31.948 [2024-11-19 02:04:42.464613] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:31.948 [2024-11-19 02:04:42.464671] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:31.948 [2024-11-19 02:04:42.464686] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:31.948 [2024-11-19 02:04:42.464696] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:31.948 [2024-11-19 02:04:42.464705] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:31.948 [2024-11-19 02:04:42.465713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.948 [2024-11-19 02:04:42.465784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:31.948 [2024-11-19 02:04:42.466340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:31.948 [2024-11-19 02:04:42.466378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.948 [2024-11-19 02:04:42.503121] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:31.948 02:04:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:31.948 02:04:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:23:31.948 02:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:31.948 02:04:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:31.948 02:04:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:32.207 02:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.207 02:04:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:23:32.208 02:04:42 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
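[Annotation] The nvme_in_userspace walk traced above reduces to a single pipeline. Note the awk variable is assigned as -v 'cc="0108"' on purpose: lspci -mm quotes its fields, so cc must carry literal double quotes to regex-match the quoted class field (class 01 = mass storage, subclass 08 = NVM, prog-if 02 = NVMe).

    # Enumerate NVMe controllers by PCI class code, as done above.
    lspci -mm -n -D | grep -i -- -p02 \
      | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
    # On this VM it prints the two BDFs claimed above: 0000:00:10.0 and 0000:00:11.0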
00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:32.208 02:04:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:32.208 ************************************ 00:23:32.208 START TEST spdk_target_abort 00:23:32.208 ************************************ 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:32.208 spdk_targetn1 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:32.208 [2024-11-19 02:04:42.715463] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:32.208 [2024-11-19 02:04:42.752680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:32.208 02:04:42 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:32.208 02:04:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:35.496 Initializing NVMe Controllers 00:23:35.496 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:23:35.496 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:35.496 Initialization complete. Launching workers. 
00:23:35.496 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10289, failed: 0 00:23:35.496 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1086, failed to submit 9203 00:23:35.496 success 806, unsuccessful 280, failed 0 00:23:35.496 02:04:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:35.496 02:04:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:38.785 Initializing NVMe Controllers 00:23:38.785 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:23:38.785 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:38.785 Initialization complete. Launching workers. 00:23:38.785 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8962, failed: 0 00:23:38.785 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1144, failed to submit 7818 00:23:38.785 success 393, unsuccessful 751, failed 0 00:23:38.785 02:04:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:38.785 02:04:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:42.073 Initializing NVMe Controllers 00:23:42.073 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:23:42.073 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:42.073 Initialization complete. Launching workers. 
00:23:42.073 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31512, failed: 0 00:23:42.073 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2349, failed to submit 29163 00:23:42.073 success 472, unsuccessful 1877, failed 0 00:23:42.073 02:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:23:42.073 02:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.073 02:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:42.073 02:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.073 02:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:23:42.073 02:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.073 02:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:42.332 02:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.332 02:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 98824 00:23:42.332 02:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 98824 ']' 00:23:42.332 02:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 98824 00:23:42.332 02:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:23:42.332 02:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:42.332 02:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98824 00:23:42.332 02:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:42.332 02:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:42.333 killing process with pid 98824 00:23:42.333 02:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98824' 00:23:42.333 02:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 98824 00:23:42.333 02:04:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 98824 00:23:42.592 00:23:42.592 real 0m10.412s 00:23:42.592 user 0m39.741s 00:23:42.592 sys 0m2.008s 00:23:42.592 02:04:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:42.592 02:04:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:42.592 ************************************ 00:23:42.592 END TEST spdk_target_abort 00:23:42.592 ************************************ 00:23:42.592 02:04:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:23:42.592 02:04:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:42.592 02:04:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:42.592 02:04:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:42.592 ************************************ 00:23:42.592 START TEST kernel_target_abort 00:23:42.592 
************************************ 00:23:42.592 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:23:42.592 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:23:42.592 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:23:42.592 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:42.592 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:42.592 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.592 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.592 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:42.592 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.592 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:42.592 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:42.592 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:42.592 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:42.592 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:42.592 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:23:42.592 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:42.592 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:42.592 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:42.592 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:23:42.592 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:42.592 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:23:42.592 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:42.592 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:42.850 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:43.108 Waiting for block devices as requested 00:23:43.108 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:43.108 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:43.108 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:43.108 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:43.108 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:43.108 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:43.108 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:43.108 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:43.108 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:43.108 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:43.108 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:43.367 No valid GPT data, bailing 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:43.367 No valid GPT data, bailing 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:43.367 No valid GPT data, bailing 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:43.367 No valid GPT data, bailing 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:23:43.367 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:23:43.368 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:23:43.368 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:23:43.368 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:23:43.368 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:23:43.368 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:23:43.368 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:43.626 02:04:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 --hostid=7cdc77f7-6c10-48d3-83fa-703a290bdf89 -a 10.0.0.1 -t tcp -s 4420 00:23:43.626 00:23:43.626 Discovery Log Number of Records 2, Generation counter 2 00:23:43.626 =====Discovery Log Entry 0====== 00:23:43.626 trtype: tcp 00:23:43.626 adrfam: ipv4 00:23:43.626 subtype: current discovery subsystem 00:23:43.626 treq: not specified, sq flow control disable supported 00:23:43.626 portid: 1 00:23:43.626 trsvcid: 4420 00:23:43.626 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:43.626 traddr: 10.0.0.1 00:23:43.626 eflags: none 00:23:43.626 sectype: none 00:23:43.626 =====Discovery Log Entry 1====== 00:23:43.626 trtype: tcp 00:23:43.626 adrfam: ipv4 00:23:43.626 subtype: nvme subsystem 00:23:43.626 treq: not specified, sq flow control disable supported 00:23:43.626 portid: 1 00:23:43.626 trsvcid: 4420 00:23:43.626 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:43.626 traddr: 10.0.0.1 00:23:43.626 eflags: none 00:23:43.626 sectype: none 00:23:43.626 02:04:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:23:43.626 02:04:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:43.626 02:04:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:43.626 02:04:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:23:43.626 02:04:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:43.626 02:04:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:43.626 02:04:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:43.626 02:04:54 
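[Annotation] The xtrace above hides the redirection targets of the echo commands, so the following is a plausible reconstruction using the stock nvmet configfs layout (attr_allow_any_host, device_path, enable, addr_*, ports/1/subsystems are the standard attribute names; routing the first echo to attr_model is an assumption, marked below). The values themselves (nqn, /dev/nvme1n1, 10.0.0.1, tcp, 4420, ipv4) come from this log.

    # Kernel NVMe-oF/TCP target via configfs, matching the echo sequence above.
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$sub" "$sub/namespaces/1" "$port"   # configfs creates namespaces/ with the subsystem
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_model"   # assumed target of the first echo
    echo 1 > "$sub/attr_allow_any_host"
    echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"
    echo 1 > "$sub/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"   # expose the subsystem on the port
    # Afterwards 'nvme discover -a 10.0.0.1 -t tcp -s 4420' lists both the discovery
    # subsystem and nqn.2016-06.io.spdk:testnqn, exactly as shown above.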
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:43.626 02:04:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:43.626 02:04:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:43.626 02:04:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:43.626 02:04:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:43.626 02:04:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:43.626 02:04:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:43.626 02:04:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:23:43.626 02:04:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:43.626 02:04:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:23:43.626 02:04:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:43.626 02:04:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:43.626 02:04:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:43.626 02:04:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:46.914 Initializing NVMe Controllers 00:23:46.914 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:46.914 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:46.914 Initialization complete. Launching workers. 00:23:46.914 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32153, failed: 0 00:23:46.914 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32153, failed to submit 0 00:23:46.914 success 0, unsuccessful 32153, failed 0 00:23:46.914 02:04:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:46.914 02:04:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:50.199 Initializing NVMe Controllers 00:23:50.199 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:50.199 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:50.199 Initialization complete. Launching workers. 
00:23:50.199 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63162, failed: 0 00:23:50.200 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25905, failed to submit 37257 00:23:50.200 success 0, unsuccessful 25905, failed 0 00:23:50.200 02:05:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:50.200 02:05:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:53.489 Initializing NVMe Controllers 00:23:53.489 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:53.490 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:53.490 Initialization complete. Launching workers. 00:23:53.490 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67904, failed: 0 00:23:53.490 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16958, failed to submit 50946 00:23:53.490 success 0, unsuccessful 16958, failed 0 00:23:53.490 02:05:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:23:53.490 02:05:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:53.490 02:05:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:23:53.490 02:05:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:53.490 02:05:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:53.490 02:05:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:53.490 02:05:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:53.490 02:05:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:23:53.490 02:05:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:23:53.490 02:05:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:53.748 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:54.685 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:54.685 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:54.685 00:23:54.685 real 0m12.009s 00:23:54.685 user 0m5.751s 00:23:54.685 sys 0m3.617s 00:23:54.685 02:05:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:54.685 02:05:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:54.685 ************************************ 00:23:54.685 END TEST kernel_target_abort 00:23:54.685 ************************************ 00:23:54.685 02:05:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:54.685 02:05:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:23:54.685 
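For reference, the kernel target exercised by kernel_target_abort above is assembled purely through the kernel's nvmet configfs tree, and clean_kernel_target tears it down the same way. A minimal standalone sketch of the equivalent sequence follows; the attribute file names (attr_serial, attr_allow_any_host, device_path, addr_*) are inferred from the standard /sys/kernel/config/nvmet layout, since the xtrace above omits the redirection targets of the echo commands.

# Minimal sketch of the configfs sequence performed by nvmf/common.sh above.
# Attribute paths are assumptions based on the standard nvmet layout; the
# xtrace does not show where each echo was redirected.
modprobe nvmet nvmet_tcp

nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"

echo "SPDK-$nqn"  > "$subsys/attr_serial"          # serial shown to hosts
echo 1            > "$subsys/attr_allow_any_host"  # no host whitelist
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

ln -s "$subsys" "$port/subsystems/"   # target now answers `nvme discover`

The teardown just above mirrors this in reverse: remove the ports/1/subsystems link, rmdir the namespace, port and subsystem directories, then modprobe -r nvmet_tcp nvmet before handing the devices back to setup.sh.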
02:05:05 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:54.685 02:05:05 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:23:54.685 02:05:05 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:54.685 02:05:05 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:23:54.685 02:05:05 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:54.685 02:05:05 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:54.685 rmmod nvme_tcp 00:23:54.685 rmmod nvme_fabrics 00:23:54.685 rmmod nvme_keyring 00:23:54.685 02:05:05 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:54.685 02:05:05 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:23:54.685 02:05:05 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:23:54.685 02:05:05 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 98824 ']' 00:23:54.685 02:05:05 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 98824 00:23:54.685 02:05:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 98824 ']' 00:23:54.685 02:05:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 98824 00:23:54.685 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (98824) - No such process 00:23:54.685 Process with pid 98824 is not found 00:23:54.685 02:05:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 98824 is not found' 00:23:54.685 02:05:05 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:23:54.685 02:05:05 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:55.252 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:55.252 Waiting for block devices as requested 00:23:55.252 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:55.252 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:55.252 02:05:05 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:55.252 02:05:05 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:55.252 02:05:05 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:23:55.252 02:05:05 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:55.252 02:05:05 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:23:55.252 02:05:05 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:23:55.252 02:05:05 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:55.252 02:05:05 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:55.252 02:05:05 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:55.512 02:05:05 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:55.512 02:05:05 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:55.512 02:05:05 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:55.512 02:05:05 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:55.512 02:05:05 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:55.512 02:05:05 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:55.512 02:05:05 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:55.512 02:05:05 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:55.512 02:05:06 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:55.512 02:05:06 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:55.512 02:05:06 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:55.512 02:05:06 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:55.512 02:05:06 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:55.512 02:05:06 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.512 02:05:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:55.512 02:05:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.771 02:05:06 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:23:55.771 00:23:55.771 real 0m25.480s 00:23:55.771 user 0m46.651s 00:23:55.771 sys 0m7.126s 00:23:55.771 02:05:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:55.771 02:05:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:55.771 ************************************ 00:23:55.771 END TEST nvmf_abort_qd_sizes 00:23:55.771 ************************************ 00:23:55.771 02:05:06 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:23:55.771 02:05:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:55.771 02:05:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:55.771 02:05:06 -- common/autotest_common.sh@10 -- # set +x 00:23:55.771 ************************************ 00:23:55.771 START TEST keyring_file 00:23:55.771 ************************************ 00:23:55.771 02:05:06 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:23:55.771 * Looking for test storage... 
00:23:55.771 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:23:55.771 02:05:06 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:55.771 02:05:06 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:23:55.771 02:05:06 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:55.771 02:05:06 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:55.771 02:05:06 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:55.771 02:05:06 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:55.771 02:05:06 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:55.771 02:05:06 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:23:55.771 02:05:06 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:23:55.771 02:05:06 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:23:55.771 02:05:06 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:23:55.771 02:05:06 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:23:55.771 02:05:06 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:23:55.771 02:05:06 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:23:55.771 02:05:06 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:55.771 02:05:06 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:23:55.771 02:05:06 keyring_file -- scripts/common.sh@345 -- # : 1 00:23:55.771 02:05:06 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:55.771 02:05:06 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:55.771 02:05:06 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:23:55.771 02:05:06 keyring_file -- scripts/common.sh@353 -- # local d=1 00:23:55.771 02:05:06 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:55.771 02:05:06 keyring_file -- scripts/common.sh@355 -- # echo 1 00:23:55.771 02:05:06 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:23:55.771 02:05:06 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:23:56.031 02:05:06 keyring_file -- scripts/common.sh@353 -- # local d=2 00:23:56.031 02:05:06 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:56.031 02:05:06 keyring_file -- scripts/common.sh@355 -- # echo 2 00:23:56.031 02:05:06 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:23:56.031 02:05:06 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:56.032 02:05:06 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:56.032 02:05:06 keyring_file -- scripts/common.sh@368 -- # return 0 00:23:56.032 02:05:06 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:56.032 02:05:06 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:56.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.032 --rc genhtml_branch_coverage=1 00:23:56.032 --rc genhtml_function_coverage=1 00:23:56.032 --rc genhtml_legend=1 00:23:56.032 --rc geninfo_all_blocks=1 00:23:56.032 --rc geninfo_unexecuted_blocks=1 00:23:56.032 00:23:56.032 ' 00:23:56.032 02:05:06 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:56.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.032 --rc genhtml_branch_coverage=1 00:23:56.032 --rc genhtml_function_coverage=1 00:23:56.032 --rc genhtml_legend=1 00:23:56.032 --rc geninfo_all_blocks=1 00:23:56.032 --rc 
geninfo_unexecuted_blocks=1 00:23:56.032 00:23:56.032 ' 00:23:56.032 02:05:06 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:56.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.032 --rc genhtml_branch_coverage=1 00:23:56.032 --rc genhtml_function_coverage=1 00:23:56.032 --rc genhtml_legend=1 00:23:56.032 --rc geninfo_all_blocks=1 00:23:56.032 --rc geninfo_unexecuted_blocks=1 00:23:56.032 00:23:56.032 ' 00:23:56.032 02:05:06 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:56.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.032 --rc genhtml_branch_coverage=1 00:23:56.032 --rc genhtml_function_coverage=1 00:23:56.032 --rc genhtml_legend=1 00:23:56.032 --rc geninfo_all_blocks=1 00:23:56.032 --rc geninfo_unexecuted_blocks=1 00:23:56.032 00:23:56.032 ' 00:23:56.032 02:05:06 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:23:56.032 02:05:06 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:56.032 02:05:06 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:23:56.032 02:05:06 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:56.032 02:05:06 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:56.032 02:05:06 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:56.032 02:05:06 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.032 02:05:06 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.032 02:05:06 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.032 02:05:06 keyring_file -- paths/export.sh@5 -- # export PATH 00:23:56.032 02:05:06 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@51 -- # : 0 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:56.032 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:56.032 02:05:06 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:23:56.032 02:05:06 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:23:56.032 02:05:06 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:23:56.032 02:05:06 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:23:56.032 02:05:06 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:23:56.032 02:05:06 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:23:56.032 02:05:06 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:56.032 02:05:06 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:56.032 02:05:06 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:23:56.032 02:05:06 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:56.032 02:05:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:56.032 02:05:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:56.032 02:05:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.0GnqJ3YpBu 00:23:56.032 02:05:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@733 -- # python - 00:23:56.032 02:05:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.0GnqJ3YpBu 00:23:56.032 02:05:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.0GnqJ3YpBu 00:23:56.032 02:05:06 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.0GnqJ3YpBu 00:23:56.032 02:05:06 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:23:56.032 02:05:06 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:56.032 02:05:06 keyring_file -- keyring/common.sh@17 -- # name=key1 00:23:56.032 02:05:06 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:23:56.032 02:05:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:56.032 02:05:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:56.032 02:05:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.tuk20wEYzU 00:23:56.032 02:05:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:23:56.032 02:05:06 keyring_file -- nvmf/common.sh@733 -- # python - 00:23:56.032 02:05:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.tuk20wEYzU 00:23:56.032 02:05:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.tuk20wEYzU 00:23:56.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
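prep_key above writes each key to a mktemp file in the TLS PSK interchange format; the actual encoding is done by an inline python snippet whose body the xtrace does not display. A sketch of that computation, assuming the standard NVMe/TCP PSK interchange encoding (base64 of the key bytes followed by their CRC-32, wrapped as NVMeTLSkey-1:<hh>:...:) — a reconstruction of the helper, not a quote of it:

# Sketch of prep_key/format_interchange_psk: wrap a raw hex key in the TLS
# PSK interchange format and stash it in a 0600 temp file. The encoding is
# an assumption from the interchange-format spec; the real python one-liner
# in nvmf/common.sh is not visible above.
key=00112233445566778899aabbccddeeff
digest=0   # hash field: 0 = none, as used by both keys in this run

psk=$(python3 - "$key" "$digest" <<'EOF'
import base64, struct, sys, zlib
key = bytes.fromhex(sys.argv[1])
blob = key + struct.pack('<I', zlib.crc32(key))   # key bytes + CRC-32, little-endian
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02x}:{base64.b64encode(blob).decode()}:")
EOF
)

path=$(mktemp)       # e.g. /tmp/tmp.0GnqJ3YpBu above
echo "$psk" > "$path"
chmod 0600 "$path"   # keyring_file_add_key rejects more permissive modes,
                     # as the 0660 negative test later in this run shows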
00:23:56.032 02:05:06 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.tuk20wEYzU 00:23:56.032 02:05:06 keyring_file -- keyring/file.sh@30 -- # tgtpid=99722 00:23:56.032 02:05:06 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:56.032 02:05:06 keyring_file -- keyring/file.sh@32 -- # waitforlisten 99722 00:23:56.032 02:05:06 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 99722 ']' 00:23:56.032 02:05:06 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.032 02:05:06 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:56.032 02:05:06 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.032 02:05:06 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:56.032 02:05:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:56.033 [2024-11-19 02:05:06.611437] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:23:56.033 [2024-11-19 02:05:06.611728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99722 ] 00:23:56.299 [2024-11-19 02:05:06.763156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.299 [2024-11-19 02:05:06.789858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.299 [2024-11-19 02:05:06.836161] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:56.634 02:05:06 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.634 02:05:06 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:23:56.634 02:05:06 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:23:56.634 02:05:06 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.634 02:05:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:56.634 [2024-11-19 02:05:06.989397] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:56.634 null0 00:23:56.634 [2024-11-19 02:05:07.021386] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:56.634 [2024-11-19 02:05:07.021762] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:56.634 02:05:07 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.634 02:05:07 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:56.635 02:05:07 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:23:56.635 02:05:07 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:56.635 02:05:07 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:56.635 02:05:07 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:56.635 02:05:07 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:56.635 02:05:07 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:56.635 02:05:07 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 
127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:56.635 02:05:07 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.635 02:05:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:56.635 [2024-11-19 02:05:07.049376] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:23:56.635 request: 00:23:56.635 { 00:23:56.635 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:23:56.635 "secure_channel": false, 00:23:56.635 "listen_address": { 00:23:56.635 "trtype": "tcp", 00:23:56.635 "traddr": "127.0.0.1", 00:23:56.635 "trsvcid": "4420" 00:23:56.635 }, 00:23:56.635 "method": "nvmf_subsystem_add_listener", 00:23:56.635 "req_id": 1 00:23:56.635 } 00:23:56.635 Got JSON-RPC error response 00:23:56.635 response: 00:23:56.635 { 00:23:56.635 "code": -32602, 00:23:56.635 "message": "Invalid parameters" 00:23:56.635 } 00:23:56.635 02:05:07 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:56.635 02:05:07 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:23:56.635 02:05:07 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:56.635 02:05:07 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:56.635 02:05:07 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:56.635 02:05:07 keyring_file -- keyring/file.sh@47 -- # bperfpid=99733 00:23:56.635 02:05:07 keyring_file -- keyring/file.sh@49 -- # waitforlisten 99733 /var/tmp/bperf.sock 00:23:56.635 02:05:07 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 99733 ']' 00:23:56.635 02:05:07 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:56.635 02:05:07 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:56.635 02:05:07 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:23:56.635 02:05:07 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:56.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:56.635 02:05:07 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:56.635 02:05:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:56.635 [2024-11-19 02:05:07.138569] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
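Several of the cases above and below are negative tests: the NOT helper runs an RPC that is expected to fail and inverts its exit status, so the "Listener already exists" / "Invalid parameters" response just seen counts as a pass. A minimal sketch of that pattern, paraphrasing the autotest_common.sh trace rather than copying it:

# Sketch of the NOT() negative-test helper visible in the xtrace: run a
# command that is *expected* to fail and succeed only if it does.
NOT() {
    local es=0
    "$@" || es=$?    # capture exit status without tripping errexit
    (( es != 0 ))    # success if and only if the wrapped command failed
}

# Usage, mirroring the test above: re-adding an existing listener must fail.
NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
    nqn.2016-06.io.spdk:cnode0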
00:23:56.635 [2024-11-19 02:05:07.139152] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99733 ] 00:23:56.898 [2024-11-19 02:05:07.301051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.898 [2024-11-19 02:05:07.325757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.898 [2024-11-19 02:05:07.359537] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:56.898 02:05:07 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.898 02:05:07 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:23:56.898 02:05:07 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0GnqJ3YpBu 00:23:56.898 02:05:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0GnqJ3YpBu 00:23:57.157 02:05:07 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.tuk20wEYzU 00:23:57.157 02:05:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.tuk20wEYzU 00:23:57.417 02:05:07 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:23:57.417 02:05:07 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:23:57.417 02:05:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:57.417 02:05:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:57.417 02:05:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:57.676 02:05:08 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.0GnqJ3YpBu == \/\t\m\p\/\t\m\p\.\0\G\n\q\J\3\Y\p\B\u ]] 00:23:57.676 02:05:08 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:23:57.676 02:05:08 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:23:57.676 02:05:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:57.676 02:05:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:57.676 02:05:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:57.935 02:05:08 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.tuk20wEYzU == \/\t\m\p\/\t\m\p\.\t\u\k\2\0\w\E\Y\z\U ]] 00:23:57.935 02:05:08 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:23:57.935 02:05:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:57.935 02:05:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:57.935 02:05:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:57.935 02:05:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:57.935 02:05:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:58.194 02:05:08 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:23:58.194 02:05:08 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:23:58.194 02:05:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:58.194 02:05:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:58.194 02:05:08 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:58.194 02:05:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:58.194 02:05:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:58.453 02:05:08 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:23:58.453 02:05:08 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:58.453 02:05:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:58.720 [2024-11-19 02:05:09.138155] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:58.720 nvme0n1 00:23:58.720 02:05:09 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:23:58.720 02:05:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:58.720 02:05:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:58.720 02:05:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:58.720 02:05:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:58.720 02:05:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:58.980 02:05:09 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:23:58.980 02:05:09 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:23:58.980 02:05:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:58.980 02:05:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:58.980 02:05:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:58.980 02:05:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:58.980 02:05:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:59.239 02:05:09 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:23:59.239 02:05:09 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:59.239 Running I/O for 1 seconds... 
00:24:00.616 13551.00 IOPS, 52.93 MiB/s 00:24:00.616 Latency(us) 00:24:00.616 [2024-11-19T02:05:11.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.616 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:24:00.616 nvme0n1 : 1.01 13568.01 53.00 0.00 0.00 9396.95 4915.20 15728.64 00:24:00.616 [2024-11-19T02:05:11.231Z] =================================================================================================================== 00:24:00.616 [2024-11-19T02:05:11.231Z] Total : 13568.01 53.00 0.00 0.00 9396.95 4915.20 15728.64 00:24:00.616 { 00:24:00.616 "results": [ 00:24:00.616 { 00:24:00.616 "job": "nvme0n1", 00:24:00.616 "core_mask": "0x2", 00:24:00.616 "workload": "randrw", 00:24:00.616 "percentage": 50, 00:24:00.616 "status": "finished", 00:24:00.616 "queue_depth": 128, 00:24:00.616 "io_size": 4096, 00:24:00.616 "runtime": 1.008254, 00:24:00.616 "iops": 13568.009648362417, 00:24:00.616 "mibps": 53.00003768891569, 00:24:00.616 "io_failed": 0, 00:24:00.616 "io_timeout": 0, 00:24:00.616 "avg_latency_us": 9396.953484316851, 00:24:00.616 "min_latency_us": 4915.2, 00:24:00.616 "max_latency_us": 15728.64 00:24:00.616 } 00:24:00.616 ], 00:24:00.616 "core_count": 1 00:24:00.616 } 00:24:00.616 02:05:10 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:00.616 02:05:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:00.616 02:05:11 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:24:00.617 02:05:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:00.617 02:05:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:00.617 02:05:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:00.617 02:05:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:00.617 02:05:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:00.875 02:05:11 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:24:00.875 02:05:11 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:24:00.875 02:05:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:00.875 02:05:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:00.875 02:05:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:00.875 02:05:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:00.875 02:05:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:01.134 02:05:11 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:24:01.134 02:05:11 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:01.134 02:05:11 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:24:01.134 02:05:11 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:01.134 02:05:11 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:24:01.134 02:05:11 keyring_file -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:24:01.134 02:05:11 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:24:01.134 02:05:11 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.134 02:05:11 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:01.134 02:05:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:01.392 [2024-11-19 02:05:11.905564] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:01.392 [2024-11-19 02:05:11.905815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cdf50 (107): Transport endpoint is not connected 00:24:01.392 [2024-11-19 02:05:11.906808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11cdf50 (9): Bad file descriptor 00:24:01.392 [2024-11-19 02:05:11.907806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:24:01.392 [2024-11-19 02:05:11.908023] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:01.392 [2024-11-19 02:05:11.908141] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:24:01.392 [2024-11-19 02:05:11.908267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:24:01.392 request: 00:24:01.392 { 00:24:01.392 "name": "nvme0", 00:24:01.392 "trtype": "tcp", 00:24:01.392 "traddr": "127.0.0.1", 00:24:01.392 "adrfam": "ipv4", 00:24:01.392 "trsvcid": "4420", 00:24:01.392 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:01.392 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:01.392 "prchk_reftag": false, 00:24:01.392 "prchk_guard": false, 00:24:01.392 "hdgst": false, 00:24:01.392 "ddgst": false, 00:24:01.392 "psk": "key1", 00:24:01.392 "allow_unrecognized_csi": false, 00:24:01.392 "method": "bdev_nvme_attach_controller", 00:24:01.392 "req_id": 1 00:24:01.392 } 00:24:01.392 Got JSON-RPC error response 00:24:01.392 response: 00:24:01.392 { 00:24:01.392 "code": -5, 00:24:01.392 "message": "Input/output error" 00:24:01.392 } 00:24:01.392 02:05:11 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:24:01.392 02:05:11 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:01.392 02:05:11 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:01.392 02:05:11 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:01.392 02:05:11 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:24:01.392 02:05:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:01.392 02:05:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:01.392 02:05:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:01.392 02:05:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:01.392 02:05:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:01.650 02:05:12 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:24:01.650 02:05:12 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:24:01.650 02:05:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:01.650 02:05:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:01.650 02:05:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:01.650 02:05:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:01.650 02:05:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:01.907 02:05:12 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:24:01.907 02:05:12 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:24:01.907 02:05:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:02.165 02:05:12 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:24:02.165 02:05:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:24:02.493 02:05:12 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:24:02.493 02:05:12 keyring_file -- keyring/file.sh@78 -- # jq length 00:24:02.493 02:05:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:02.493 02:05:13 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:24:02.493 02:05:13 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.0GnqJ3YpBu 00:24:02.493 02:05:13 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.0GnqJ3YpBu 00:24:02.493 02:05:13 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:24:02.493 02:05:13 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.0GnqJ3YpBu 00:24:02.493 02:05:13 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:24:02.493 02:05:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:02.493 02:05:13 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:24:02.493 02:05:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:02.493 02:05:13 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0GnqJ3YpBu 00:24:02.493 02:05:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0GnqJ3YpBu 00:24:02.753 [2024-11-19 02:05:13.320671] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.0GnqJ3YpBu': 0100660 00:24:02.753 [2024-11-19 02:05:13.320708] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:02.753 request: 00:24:02.753 { 00:24:02.753 "name": "key0", 00:24:02.753 "path": "/tmp/tmp.0GnqJ3YpBu", 00:24:02.753 "method": "keyring_file_add_key", 00:24:02.753 "req_id": 1 00:24:02.753 } 00:24:02.753 Got JSON-RPC error response 00:24:02.753 response: 00:24:02.753 { 00:24:02.753 "code": -1, 00:24:02.753 "message": "Operation not permitted" 00:24:02.753 } 00:24:02.753 02:05:13 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:24:02.753 02:05:13 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:02.753 02:05:13 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:02.753 02:05:13 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:02.753 02:05:13 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.0GnqJ3YpBu 00:24:02.753 02:05:13 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.0GnqJ3YpBu 00:24:02.753 02:05:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.0GnqJ3YpBu 00:24:03.011 02:05:13 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.0GnqJ3YpBu 00:24:03.011 02:05:13 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:24:03.011 02:05:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:03.011 02:05:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:03.011 02:05:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:03.011 02:05:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:03.011 02:05:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:03.270 02:05:13 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:24:03.270 02:05:13 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:03.270 02:05:13 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:24:03.270 02:05:13 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:03.270 02:05:13 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:24:03.270 02:05:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:03.270 02:05:13 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:24:03.270 02:05:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:03.270 02:05:13 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:03.270 02:05:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:03.529 [2024-11-19 02:05:14.092849] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.0GnqJ3YpBu': No such file or directory 00:24:03.529 [2024-11-19 02:05:14.092885] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:24:03.529 [2024-11-19 02:05:14.092920] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:24:03.529 [2024-11-19 02:05:14.092929] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:24:03.529 [2024-11-19 02:05:14.092937] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:03.529 [2024-11-19 02:05:14.092944] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:24:03.529 request: 00:24:03.529 { 00:24:03.529 "name": "nvme0", 00:24:03.529 "trtype": "tcp", 00:24:03.529 "traddr": "127.0.0.1", 00:24:03.529 "adrfam": "ipv4", 00:24:03.529 "trsvcid": "4420", 00:24:03.529 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:03.529 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:03.529 "prchk_reftag": false, 00:24:03.529 "prchk_guard": false, 00:24:03.529 "hdgst": false, 00:24:03.529 "ddgst": false, 00:24:03.529 "psk": "key0", 00:24:03.529 "allow_unrecognized_csi": false, 00:24:03.529 "method": "bdev_nvme_attach_controller", 00:24:03.529 "req_id": 1 00:24:03.529 } 00:24:03.529 Got JSON-RPC error response 00:24:03.529 response: 00:24:03.529 { 00:24:03.529 "code": -19, 00:24:03.529 "message": "No such device" 00:24:03.529 } 00:24:03.529 02:05:14 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:24:03.529 02:05:14 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:03.529 02:05:14 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:03.529 02:05:14 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:03.529 02:05:14 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:24:03.529 02:05:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:03.788 02:05:14 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:03.788 02:05:14 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:03.788 02:05:14 keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:03.788 02:05:14 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:03.788 
02:05:14 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:03.788 02:05:14 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:03.788 02:05:14 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Vg7FfbuyU0 00:24:03.788 02:05:14 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:03.788 02:05:14 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:03.788 02:05:14 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:24:03.788 02:05:14 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:03.788 02:05:14 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:24:03.788 02:05:14 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:24:03.788 02:05:14 keyring_file -- nvmf/common.sh@733 -- # python - 00:24:03.788 02:05:14 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Vg7FfbuyU0 00:24:03.788 02:05:14 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Vg7FfbuyU0 00:24:03.788 02:05:14 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.Vg7FfbuyU0 00:24:03.788 02:05:14 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Vg7FfbuyU0 00:24:03.788 02:05:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Vg7FfbuyU0 00:24:04.047 02:05:14 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:04.047 02:05:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:04.306 nvme0n1 00:24:04.306 02:05:14 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:24:04.306 02:05:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:04.306 02:05:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:04.306 02:05:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:04.306 02:05:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:04.306 02:05:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:04.874 02:05:15 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:24:04.874 02:05:15 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:24:04.874 02:05:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:24:04.874 02:05:15 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:24:04.874 02:05:15 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:24:04.874 02:05:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:04.874 02:05:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:04.874 02:05:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:05.133 02:05:15 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:24:05.133 02:05:15 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:24:05.133 02:05:15 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:24:05.133 02:05:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:05.133 02:05:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:05.133 02:05:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:05.133 02:05:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:05.392 02:05:15 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:24:05.392 02:05:15 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:05.392 02:05:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:05.650 02:05:16 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:24:05.650 02:05:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:05.650 02:05:16 keyring_file -- keyring/file.sh@105 -- # jq length 00:24:05.909 02:05:16 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:24:05.909 02:05:16 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Vg7FfbuyU0 00:24:05.909 02:05:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Vg7FfbuyU0 00:24:06.168 02:05:16 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.tuk20wEYzU 00:24:06.168 02:05:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.tuk20wEYzU 00:24:06.426 02:05:16 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:06.426 02:05:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:06.685 nvme0n1 00:24:06.685 02:05:17 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:24:06.685 02:05:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:24:06.945 02:05:17 keyring_file -- keyring/file.sh@113 -- # config='{ 00:24:06.945 "subsystems": [ 00:24:06.945 { 00:24:06.945 "subsystem": "keyring", 00:24:06.945 "config": [ 00:24:06.945 { 00:24:06.945 "method": "keyring_file_add_key", 00:24:06.945 "params": { 00:24:06.945 "name": "key0", 00:24:06.945 "path": "/tmp/tmp.Vg7FfbuyU0" 00:24:06.945 } 00:24:06.945 }, 00:24:06.945 { 00:24:06.945 "method": "keyring_file_add_key", 00:24:06.945 "params": { 00:24:06.945 "name": "key1", 00:24:06.945 "path": "/tmp/tmp.tuk20wEYzU" 00:24:06.945 } 00:24:06.945 } 00:24:06.945 ] 00:24:06.945 }, 00:24:06.945 { 00:24:06.945 "subsystem": "iobuf", 00:24:06.945 "config": [ 00:24:06.945 { 00:24:06.945 "method": "iobuf_set_options", 00:24:06.945 "params": { 00:24:06.945 "small_pool_count": 8192, 00:24:06.945 "large_pool_count": 1024, 00:24:06.945 "small_bufsize": 8192, 00:24:06.945 "large_bufsize": 135168, 00:24:06.945 "enable_numa": false 00:24:06.945 } 00:24:06.945 } 00:24:06.945 ] 00:24:06.945 }, 00:24:06.945 { 00:24:06.945 "subsystem": 
"sock", 00:24:06.945 "config": [ 00:24:06.945 { 00:24:06.945 "method": "sock_set_default_impl", 00:24:06.945 "params": { 00:24:06.945 "impl_name": "uring" 00:24:06.945 } 00:24:06.945 }, 00:24:06.945 { 00:24:06.945 "method": "sock_impl_set_options", 00:24:06.945 "params": { 00:24:06.945 "impl_name": "ssl", 00:24:06.945 "recv_buf_size": 4096, 00:24:06.945 "send_buf_size": 4096, 00:24:06.945 "enable_recv_pipe": true, 00:24:06.945 "enable_quickack": false, 00:24:06.945 "enable_placement_id": 0, 00:24:06.945 "enable_zerocopy_send_server": true, 00:24:06.945 "enable_zerocopy_send_client": false, 00:24:06.945 "zerocopy_threshold": 0, 00:24:06.945 "tls_version": 0, 00:24:06.945 "enable_ktls": false 00:24:06.945 } 00:24:06.945 }, 00:24:06.945 { 00:24:06.945 "method": "sock_impl_set_options", 00:24:06.945 "params": { 00:24:06.945 "impl_name": "posix", 00:24:06.945 "recv_buf_size": 2097152, 00:24:06.945 "send_buf_size": 2097152, 00:24:06.945 "enable_recv_pipe": true, 00:24:06.945 "enable_quickack": false, 00:24:06.945 "enable_placement_id": 0, 00:24:06.945 "enable_zerocopy_send_server": true, 00:24:06.945 "enable_zerocopy_send_client": false, 00:24:06.945 "zerocopy_threshold": 0, 00:24:06.945 "tls_version": 0, 00:24:06.945 "enable_ktls": false 00:24:06.945 } 00:24:06.945 }, 00:24:06.945 { 00:24:06.945 "method": "sock_impl_set_options", 00:24:06.945 "params": { 00:24:06.945 "impl_name": "uring", 00:24:06.945 "recv_buf_size": 2097152, 00:24:06.945 "send_buf_size": 2097152, 00:24:06.945 "enable_recv_pipe": true, 00:24:06.945 "enable_quickack": false, 00:24:06.945 "enable_placement_id": 0, 00:24:06.945 "enable_zerocopy_send_server": false, 00:24:06.945 "enable_zerocopy_send_client": false, 00:24:06.945 "zerocopy_threshold": 0, 00:24:06.945 "tls_version": 0, 00:24:06.945 "enable_ktls": false 00:24:06.945 } 00:24:06.945 } 00:24:06.945 ] 00:24:06.945 }, 00:24:06.945 { 00:24:06.945 "subsystem": "vmd", 00:24:06.945 "config": [] 00:24:06.945 }, 00:24:06.945 { 00:24:06.945 "subsystem": "accel", 00:24:06.945 "config": [ 00:24:06.945 { 00:24:06.945 "method": "accel_set_options", 00:24:06.945 "params": { 00:24:06.945 "small_cache_size": 128, 00:24:06.945 "large_cache_size": 16, 00:24:06.945 "task_count": 2048, 00:24:06.945 "sequence_count": 2048, 00:24:06.945 "buf_count": 2048 00:24:06.945 } 00:24:06.945 } 00:24:06.945 ] 00:24:06.945 }, 00:24:06.945 { 00:24:06.945 "subsystem": "bdev", 00:24:06.945 "config": [ 00:24:06.945 { 00:24:06.945 "method": "bdev_set_options", 00:24:06.945 "params": { 00:24:06.945 "bdev_io_pool_size": 65535, 00:24:06.945 "bdev_io_cache_size": 256, 00:24:06.945 "bdev_auto_examine": true, 00:24:06.945 "iobuf_small_cache_size": 128, 00:24:06.945 "iobuf_large_cache_size": 16 00:24:06.945 } 00:24:06.945 }, 00:24:06.945 { 00:24:06.945 "method": "bdev_raid_set_options", 00:24:06.946 "params": { 00:24:06.946 "process_window_size_kb": 1024, 00:24:06.946 "process_max_bandwidth_mb_sec": 0 00:24:06.946 } 00:24:06.946 }, 00:24:06.946 { 00:24:06.946 "method": "bdev_iscsi_set_options", 00:24:06.946 "params": { 00:24:06.946 "timeout_sec": 30 00:24:06.946 } 00:24:06.946 }, 00:24:06.946 { 00:24:06.946 "method": "bdev_nvme_set_options", 00:24:06.946 "params": { 00:24:06.946 "action_on_timeout": "none", 00:24:06.946 "timeout_us": 0, 00:24:06.946 "timeout_admin_us": 0, 00:24:06.946 "keep_alive_timeout_ms": 10000, 00:24:06.946 "arbitration_burst": 0, 00:24:06.946 "low_priority_weight": 0, 00:24:06.946 "medium_priority_weight": 0, 00:24:06.946 "high_priority_weight": 0, 00:24:06.946 "nvme_adminq_poll_period_us": 
10000, 00:24:06.946 "nvme_ioq_poll_period_us": 0, 00:24:06.946 "io_queue_requests": 512, 00:24:06.946 "delay_cmd_submit": true, 00:24:06.946 "transport_retry_count": 4, 00:24:06.946 "bdev_retry_count": 3, 00:24:06.946 "transport_ack_timeout": 0, 00:24:06.946 "ctrlr_loss_timeout_sec": 0, 00:24:06.946 "reconnect_delay_sec": 0, 00:24:06.946 "fast_io_fail_timeout_sec": 0, 00:24:06.946 "disable_auto_failback": false, 00:24:06.946 "generate_uuids": false, 00:24:06.946 "transport_tos": 0, 00:24:06.946 "nvme_error_stat": false, 00:24:06.946 "rdma_srq_size": 0, 00:24:06.946 "io_path_stat": false, 00:24:06.946 "allow_accel_sequence": false, 00:24:06.946 "rdma_max_cq_size": 0, 00:24:06.946 "rdma_cm_event_timeout_ms": 0, 00:24:06.946 "dhchap_digests": [ 00:24:06.946 "sha256", 00:24:06.946 "sha384", 00:24:06.946 "sha512" 00:24:06.946 ], 00:24:06.946 "dhchap_dhgroups": [ 00:24:06.946 "null", 00:24:06.946 "ffdhe2048", 00:24:06.946 "ffdhe3072", 00:24:06.946 "ffdhe4096", 00:24:06.946 "ffdhe6144", 00:24:06.946 "ffdhe8192" 00:24:06.946 ] 00:24:06.946 } 00:24:06.946 }, 00:24:06.946 { 00:24:06.946 "method": "bdev_nvme_attach_controller", 00:24:06.946 "params": { 00:24:06.946 "name": "nvme0", 00:24:06.946 "trtype": "TCP", 00:24:06.946 "adrfam": "IPv4", 00:24:06.946 "traddr": "127.0.0.1", 00:24:06.946 "trsvcid": "4420", 00:24:06.946 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:06.946 "prchk_reftag": false, 00:24:06.946 "prchk_guard": false, 00:24:06.946 "ctrlr_loss_timeout_sec": 0, 00:24:06.946 "reconnect_delay_sec": 0, 00:24:06.946 "fast_io_fail_timeout_sec": 0, 00:24:06.946 "psk": "key0", 00:24:06.946 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:06.946 "hdgst": false, 00:24:06.946 "ddgst": false, 00:24:06.946 "multipath": "multipath" 00:24:06.946 } 00:24:06.946 }, 00:24:06.946 { 00:24:06.946 "method": "bdev_nvme_set_hotplug", 00:24:06.946 "params": { 00:24:06.946 "period_us": 100000, 00:24:06.946 "enable": false 00:24:06.946 } 00:24:06.946 }, 00:24:06.946 { 00:24:06.946 "method": "bdev_wait_for_examine" 00:24:06.946 } 00:24:06.946 ] 00:24:06.946 }, 00:24:06.946 { 00:24:06.946 "subsystem": "nbd", 00:24:06.946 "config": [] 00:24:06.946 } 00:24:06.946 ] 00:24:06.946 }' 00:24:06.946 02:05:17 keyring_file -- keyring/file.sh@115 -- # killprocess 99733 00:24:06.946 02:05:17 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 99733 ']' 00:24:06.946 02:05:17 keyring_file -- common/autotest_common.sh@958 -- # kill -0 99733 00:24:06.946 02:05:17 keyring_file -- common/autotest_common.sh@959 -- # uname 00:24:06.946 02:05:17 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:06.946 02:05:17 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99733 00:24:06.946 killing process with pid 99733 00:24:06.946 Received shutdown signal, test time was about 1.000000 seconds 00:24:06.946 00:24:06.946 Latency(us) 00:24:06.946 [2024-11-19T02:05:17.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.946 [2024-11-19T02:05:17.561Z] =================================================================================================================== 00:24:06.946 [2024-11-19T02:05:17.561Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:06.946 02:05:17 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:06.946 02:05:17 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:06.946 02:05:17 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99733' 00:24:06.946 
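The JSON dump above is plain save_config output from the bperf socket; both test keys appear under the "keyring" subsystem. A one-liner sketch for pulling just that section out of such a dump (the jq path mirrors the shape printed above):

# Extract the keyring config from a saved SPDK configuration
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config \
    | jq '.subsystems[] | select(.subsystem == "keyring").config'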
02:05:17 keyring_file -- common/autotest_common.sh@973 -- # kill 99733 00:24:06.946 02:05:17 keyring_file -- common/autotest_common.sh@978 -- # wait 99733 00:24:07.206 02:05:17 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:24:07.206 02:05:17 keyring_file -- keyring/file.sh@118 -- # bperfpid=99969 00:24:07.206 02:05:17 keyring_file -- keyring/file.sh@120 -- # waitforlisten 99969 /var/tmp/bperf.sock 00:24:07.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:07.206 02:05:17 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 99969 ']' 00:24:07.206 02:05:17 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:07.206 02:05:17 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:24:07.206 "subsystems": [ 00:24:07.206 { 00:24:07.206 "subsystem": "keyring", 00:24:07.206 "config": [ 00:24:07.206 { 00:24:07.206 "method": "keyring_file_add_key", 00:24:07.206 "params": { 00:24:07.206 "name": "key0", 00:24:07.206 "path": "/tmp/tmp.Vg7FfbuyU0" 00:24:07.206 } 00:24:07.206 }, 00:24:07.206 { 00:24:07.206 "method": "keyring_file_add_key", 00:24:07.206 "params": { 00:24:07.206 "name": "key1", 00:24:07.206 "path": "/tmp/tmp.tuk20wEYzU" 00:24:07.206 } 00:24:07.206 } 00:24:07.206 ] 00:24:07.206 }, 00:24:07.206 { 00:24:07.206 "subsystem": "iobuf", 00:24:07.206 "config": [ 00:24:07.206 { 00:24:07.206 "method": "iobuf_set_options", 00:24:07.206 "params": { 00:24:07.206 "small_pool_count": 8192, 00:24:07.206 "large_pool_count": 1024, 00:24:07.206 "small_bufsize": 8192, 00:24:07.206 "large_bufsize": 135168, 00:24:07.206 "enable_numa": false 00:24:07.206 } 00:24:07.206 } 00:24:07.206 ] 00:24:07.206 }, 00:24:07.206 { 00:24:07.206 "subsystem": "sock", 00:24:07.206 "config": [ 00:24:07.206 { 00:24:07.206 "method": "sock_set_default_impl", 00:24:07.206 "params": { 00:24:07.206 "impl_name": "uring" 00:24:07.206 } 00:24:07.206 }, 00:24:07.206 { 00:24:07.206 "method": "sock_impl_set_options", 00:24:07.206 "params": { 00:24:07.206 "impl_name": "ssl", 00:24:07.206 "recv_buf_size": 4096, 00:24:07.206 "send_buf_size": 4096, 00:24:07.206 "enable_recv_pipe": true, 00:24:07.206 "enable_quickack": false, 00:24:07.206 "enable_placement_id": 0, 00:24:07.206 "enable_zerocopy_send_server": true, 00:24:07.206 "enable_zerocopy_send_client": false, 00:24:07.206 "zerocopy_threshold": 0, 00:24:07.206 "tls_version": 0, 00:24:07.206 "enable_ktls": false 00:24:07.206 } 00:24:07.206 }, 00:24:07.206 { 00:24:07.206 "method": "sock_impl_set_options", 00:24:07.206 "params": { 00:24:07.206 "impl_name": "posix", 00:24:07.206 "recv_buf_size": 2097152, 00:24:07.206 "send_buf_size": 2097152, 00:24:07.206 "enable_recv_pipe": true, 00:24:07.206 "enable_quickack": false, 00:24:07.206 "enable_placement_id": 0, 00:24:07.206 "enable_zerocopy_send_server": true, 00:24:07.206 "enable_zerocopy_send_client": false, 00:24:07.206 "zerocopy_threshold": 0, 00:24:07.206 "tls_version": 0, 00:24:07.206 "enable_ktls": false 00:24:07.206 } 00:24:07.206 }, 00:24:07.206 { 00:24:07.206 "method": "sock_impl_set_options", 00:24:07.206 "params": { 00:24:07.206 "impl_name": "uring", 00:24:07.206 "recv_buf_size": 2097152, 00:24:07.206 "send_buf_size": 2097152, 00:24:07.206 "enable_recv_pipe": true, 00:24:07.206 "enable_quickack": false, 00:24:07.206 "enable_placement_id": 0, 00:24:07.206 "enable_zerocopy_send_server": false, 00:24:07.206 
"enable_zerocopy_send_client": false, 00:24:07.206 "zerocopy_threshold": 0, 00:24:07.206 "tls_version": 0, 00:24:07.206 "enable_ktls": false 00:24:07.206 } 00:24:07.206 } 00:24:07.206 ] 00:24:07.206 }, 00:24:07.206 { 00:24:07.206 "subsystem": "vmd", 00:24:07.206 "config": [] 00:24:07.206 }, 00:24:07.206 { 00:24:07.206 "subsystem": "accel", 00:24:07.206 "config": [ 00:24:07.207 { 00:24:07.207 "method": "accel_set_options", 00:24:07.207 "params": { 00:24:07.207 "small_cache_size": 128, 00:24:07.207 "large_cache_size": 16, 00:24:07.207 "task_count": 2048, 00:24:07.207 "sequence_count": 2048, 00:24:07.207 "buf_count": 2048 00:24:07.207 } 00:24:07.207 } 00:24:07.207 ] 00:24:07.207 }, 00:24:07.207 { 00:24:07.207 "subsystem": "bdev", 00:24:07.207 "config": [ 00:24:07.207 { 00:24:07.207 "method": "bdev_set_options", 00:24:07.207 "params": { 00:24:07.207 "bdev_io_pool_size": 65535, 00:24:07.207 "bdev_io_cache_size": 256, 00:24:07.207 "bdev_auto_examine": true, 00:24:07.207 "iobuf_small_cache_size": 128, 00:24:07.207 "iobuf_large_cache_size": 16 00:24:07.207 } 00:24:07.207 }, 00:24:07.207 { 00:24:07.207 "method": "bdev_raid_set_options", 00:24:07.207 "params": { 00:24:07.207 "process_window_size_kb": 1024, 00:24:07.207 "process_max_bandwidth_mb_sec": 0 00:24:07.207 } 00:24:07.207 }, 00:24:07.207 { 00:24:07.207 "method": "bdev_iscsi_set_options", 00:24:07.207 "params": { 00:24:07.207 "timeout_sec": 30 00:24:07.207 } 00:24:07.207 }, 00:24:07.207 { 00:24:07.207 "method": "bdev_nvme_set_options", 00:24:07.207 "params": { 00:24:07.207 "action_on_timeout": "none", 00:24:07.207 "timeout_us": 0, 00:24:07.207 "timeout_admin_us": 0, 00:24:07.207 "keep_alive_timeout_ms": 10000, 00:24:07.207 "arbitration_burst": 0, 00:24:07.207 "low_priority_weight": 0, 00:24:07.207 "medium_priority_weight": 0, 00:24:07.207 "high_priority_weight": 0, 00:24:07.207 "nvme_adminq_poll_period_us": 10000, 00:24:07.207 "nvme_ioq_poll_period_us": 0, 00:24:07.207 "io_queue_requests": 512, 00:24:07.207 "delay_cmd_submit": true, 00:24:07.207 "transport_retry_count": 4, 00:24:07.207 "bdev_retry_count": 3, 00:24:07.207 "transport_ack_timeout": 0, 00:24:07.207 "ctrlr_loss_timeout_sec": 0, 00:24:07.207 "reconnect_delay_sec": 0, 00:24:07.207 "fast_io_fail_timeout_sec": 0, 00:24:07.207 "disable_auto_failback": false, 00:24:07.207 "generate_uuids": false, 00:24:07.207 "transport_tos": 0, 00:24:07.207 "nvme_error_stat": false, 00:24:07.207 "rdma_srq_size": 0, 00:24:07.207 "io_path_stat": false, 00:24:07.207 "allow_accel_sequence": false, 00:24:07.207 "rdma_max_cq_size": 0, 00:24:07.207 "rdma_cm_event_timeout_ms": 0, 00:24:07.207 "dhchap_digests": [ 00:24:07.207 "sha256", 00:24:07.207 "sha384", 00:24:07.207 "sha512" 00:24:07.207 ], 00:24:07.207 "dhchap_dhgroups": [ 00:24:07.207 "null", 00:24:07.207 "ffdhe2048", 00:24:07.207 "ffdhe3072", 00:24:07.207 "ffdhe4096", 00:24:07.207 "ffdhe6144", 00:24:07.207 "ffdhe8192" 00:24:07.207 ] 00:24:07.207 } 00:24:07.207 }, 00:24:07.207 { 00:24:07.207 "method": "bdev_nvme_attach_controller", 00:24:07.207 "params": { 00:24:07.207 "name": "nvme0", 00:24:07.207 "trtype": "TCP", 00:24:07.207 "adrfam": "IPv4", 00:24:07.207 "traddr": "127.0.0.1", 00:24:07.207 "trsvcid": "4420", 00:24:07.207 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:07.207 "prchk_reftag": false, 00:24:07.207 "prchk_guard": false, 00:24:07.207 "ctrlr_loss_timeout_sec": 0, 00:24:07.207 "reconnect_delay_sec": 0, 00:24:07.207 "fast_io_fail_timeout_sec": 0, 00:24:07.207 "psk": "key0", 00:24:07.207 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:07.207 
"hdgst": false, 00:24:07.207 "ddgst": false, 00:24:07.207 "multipath": "multipath" 00:24:07.207 } 00:24:07.207 }, 00:24:07.207 { 00:24:07.207 "method": "bdev_nvme_set_hotplug", 00:24:07.207 "params": { 00:24:07.207 "period_us": 100000, 00:24:07.207 "enable": false 00:24:07.207 } 00:24:07.207 }, 00:24:07.207 { 00:24:07.207 "method": "bdev_wait_for_examine" 00:24:07.207 } 00:24:07.207 ] 00:24:07.207 }, 00:24:07.207 { 00:24:07.207 "subsystem": "nbd", 00:24:07.207 "config": [] 00:24:07.207 } 00:24:07.207 ] 00:24:07.207 }' 00:24:07.207 02:05:17 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:07.207 02:05:17 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:07.207 02:05:17 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:07.207 02:05:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:07.207 [2024-11-19 02:05:17.707428] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 00:24:07.207 [2024-11-19 02:05:17.707739] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99969 ] 00:24:07.466 [2024-11-19 02:05:17.845156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.466 [2024-11-19 02:05:17.864413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.466 [2024-11-19 02:05:17.973419] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:07.466 [2024-11-19 02:05:18.009682] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:08.034 02:05:18 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:08.034 02:05:18 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:24:08.034 02:05:18 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:24:08.034 02:05:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:08.034 02:05:18 keyring_file -- keyring/file.sh@121 -- # jq length 00:24:08.293 02:05:18 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:24:08.293 02:05:18 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:24:08.293 02:05:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:08.293 02:05:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:08.293 02:05:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:08.293 02:05:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:08.293 02:05:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:08.552 02:05:19 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:24:08.552 02:05:19 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:24:08.552 02:05:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:08.552 02:05:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:08.552 02:05:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:08.552 02:05:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:08.552 
02:05:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:08.811 02:05:19 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:24:08.811 02:05:19 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:24:08.811 02:05:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:24:08.811 02:05:19 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:24:09.070 02:05:19 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:24:09.070 02:05:19 keyring_file -- keyring/file.sh@1 -- # cleanup 00:24:09.070 02:05:19 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.Vg7FfbuyU0 /tmp/tmp.tuk20wEYzU 00:24:09.070 02:05:19 keyring_file -- keyring/file.sh@20 -- # killprocess 99969 00:24:09.070 02:05:19 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 99969 ']' 00:24:09.070 02:05:19 keyring_file -- common/autotest_common.sh@958 -- # kill -0 99969 00:24:09.070 02:05:19 keyring_file -- common/autotest_common.sh@959 -- # uname 00:24:09.070 02:05:19 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:09.070 02:05:19 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99969 00:24:09.070 killing process with pid 99969 00:24:09.070 Received shutdown signal, test time was about 1.000000 seconds 00:24:09.070 00:24:09.070 Latency(us) 00:24:09.070 [2024-11-19T02:05:19.685Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:09.070 [2024-11-19T02:05:19.685Z] =================================================================================================================== 00:24:09.070 [2024-11-19T02:05:19.685Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:09.070 02:05:19 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:09.070 02:05:19 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:09.070 02:05:19 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99969' 00:24:09.070 02:05:19 keyring_file -- common/autotest_common.sh@973 -- # kill 99969 00:24:09.070 02:05:19 keyring_file -- common/autotest_common.sh@978 -- # wait 99969 00:24:09.330 02:05:19 keyring_file -- keyring/file.sh@21 -- # killprocess 99722 00:24:09.330 02:05:19 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 99722 ']' 00:24:09.330 02:05:19 keyring_file -- common/autotest_common.sh@958 -- # kill -0 99722 00:24:09.330 02:05:19 keyring_file -- common/autotest_common.sh@959 -- # uname 00:24:09.330 02:05:19 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:09.330 02:05:19 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99722 00:24:09.330 killing process with pid 99722 00:24:09.330 02:05:19 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:09.330 02:05:19 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:09.330 02:05:19 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99722' 00:24:09.330 02:05:19 keyring_file -- common/autotest_common.sh@973 -- # kill 99722 00:24:09.330 02:05:19 keyring_file -- common/autotest_common.sh@978 -- # wait 99722 00:24:09.589 00:24:09.589 real 0m13.802s 00:24:09.589 user 0m35.650s 00:24:09.589 sys 0m2.567s 00:24:09.589 02:05:19 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:09.589 
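Every refcount assertion in this test reduces to the same keyring_get_keys + jq pipeline; a sketch of the shared helper (socket path and key names as in this log):

# Fetch one key's refcnt over the bperf RPC socket
get_refcnt() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        keyring_get_keys | jq -r --arg n "$1" '.[] | select(.name == $n).refcnt'
}
(( $(get_refcnt key0) == 2 ))   # one ref from the keyring, one from the attached controller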
************************************ 00:24:09.589 END TEST keyring_file 00:24:09.589 ************************************ 00:24:09.589 02:05:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:09.589 02:05:20 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:24:09.589 02:05:20 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:09.589 02:05:20 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:09.589 02:05:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:09.589 02:05:20 -- common/autotest_common.sh@10 -- # set +x 00:24:09.589 ************************************ 00:24:09.589 START TEST keyring_linux 00:24:09.589 ************************************ 00:24:09.589 02:05:20 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:24:09.589 Joined session keyring: 314527378 00:24:09.589 * Looking for test storage... 00:24:09.589 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:09.589 02:05:20 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:09.589 02:05:20 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:24:09.589 02:05:20 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:09.849 02:05:20 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:09.849 02:05:20 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:09.849 02:05:20 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:09.850 02:05:20 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:09.850 02:05:20 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:24:09.850 02:05:20 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:24:09.850 02:05:20 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:24:09.850 02:05:20 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:24:09.850 02:05:20 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:24:09.850 02:05:20 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:24:09.850 02:05:20 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:24:09.850 02:05:20 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:09.850 02:05:20 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:24:09.850 02:05:20 keyring_linux -- scripts/common.sh@345 -- # : 1 00:24:09.850 02:05:20 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:09.850 02:05:20 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:09.850 02:05:20 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:24:09.850 02:05:20 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:24:09.850 02:05:20 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:09.850 02:05:20 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:24:09.850 02:05:20 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:24:09.850 02:05:20 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:24:09.850 02:05:20 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:24:09.850 02:05:20 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:09.850 02:05:20 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:24:09.850 02:05:20 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:24:09.850 02:05:20 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:09.850 02:05:20 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:09.850 02:05:20 keyring_linux -- scripts/common.sh@368 -- # return 0 00:24:09.850 02:05:20 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:09.850 02:05:20 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:09.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.850 --rc genhtml_branch_coverage=1 00:24:09.850 --rc genhtml_function_coverage=1 00:24:09.850 --rc genhtml_legend=1 00:24:09.850 --rc geninfo_all_blocks=1 00:24:09.850 --rc geninfo_unexecuted_blocks=1 00:24:09.850 00:24:09.850 ' 00:24:09.850 02:05:20 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:09.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.850 --rc genhtml_branch_coverage=1 00:24:09.850 --rc genhtml_function_coverage=1 00:24:09.850 --rc genhtml_legend=1 00:24:09.850 --rc geninfo_all_blocks=1 00:24:09.850 --rc geninfo_unexecuted_blocks=1 00:24:09.850 00:24:09.850 ' 00:24:09.850 02:05:20 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:09.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.850 --rc genhtml_branch_coverage=1 00:24:09.850 --rc genhtml_function_coverage=1 00:24:09.850 --rc genhtml_legend=1 00:24:09.850 --rc geninfo_all_blocks=1 00:24:09.850 --rc geninfo_unexecuted_blocks=1 00:24:09.850 00:24:09.850 ' 00:24:09.850 02:05:20 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:09.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.850 --rc genhtml_branch_coverage=1 00:24:09.850 --rc genhtml_function_coverage=1 00:24:09.850 --rc genhtml_legend=1 00:24:09.850 --rc geninfo_all_blocks=1 00:24:09.850 --rc geninfo_unexecuted_blocks=1 00:24:09.850 00:24:09.850 ' 00:24:09.850 02:05:20 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:09.850 02:05:20 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:09.850 02:05:20 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=7cdc77f7-6c10-48d3-83fa-703a290bdf89 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:09.850 02:05:20 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:24:09.850 02:05:20 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.850 02:05:20 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.850 02:05:20 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.850 02:05:20 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.850 02:05:20 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.850 02:05:20 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.850 02:05:20 keyring_linux -- paths/export.sh@5 -- # export PATH 00:24:09.850 02:05:20 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@51 -- # : 0 
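nvmf/common.sh above derives the host identity from nvme gen-hostnqn; a minimal sketch of that setup (the parameter expansion used to strip the NQN prefix is an assumption, chosen to reproduce the values in this log):

NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}     # keep only the trailing UUID
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")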
00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:09.850 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:09.850 02:05:20 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:09.850 02:05:20 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:09.850 02:05:20 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:09.850 02:05:20 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:24:09.850 02:05:20 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:24:09.850 02:05:20 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:24:09.850 02:05:20 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:24:09.850 02:05:20 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:09.850 02:05:20 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:24:09.850 02:05:20 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:09.850 02:05:20 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:09.850 02:05:20 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:24:09.850 02:05:20 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:24:09.850 02:05:20 keyring_linux -- nvmf/common.sh@733 -- # python - 00:24:09.850 02:05:20 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:24:09.850 /tmp/:spdk-test:key0 00:24:09.850 02:05:20 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:24:09.850 02:05:20 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:24:09.850 02:05:20 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:24:09.850 02:05:20 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:24:09.850 02:05:20 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:09.850 02:05:20 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:24:09.850 02:05:20 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:24:09.851 02:05:20 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:24:09.851 02:05:20 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:09.851 02:05:20 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:24:09.851 02:05:20 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:09.851 02:05:20 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:24:09.851 02:05:20 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:24:09.851 02:05:20 keyring_linux -- nvmf/common.sh@733 -- # python - 00:24:09.851 02:05:20 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:24:09.851 /tmp/:spdk-test:key1 00:24:09.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.851 02:05:20 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:24:09.851 02:05:20 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=100092 00:24:09.851 02:05:20 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:09.851 02:05:20 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 100092 00:24:09.851 02:05:20 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 100092 ']' 00:24:09.851 02:05:20 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.851 02:05:20 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:09.851 02:05:20 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.851 02:05:20 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:09.851 02:05:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:09.851 [2024-11-19 02:05:20.423090] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:24:09.851 [2024-11-19 02:05:20.423369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100092 ] 00:24:10.110 [2024-11-19 02:05:20.565803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.110 [2024-11-19 02:05:20.585007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.110 [2024-11-19 02:05:20.619273] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:10.110 02:05:20 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:10.110 02:05:20 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:24:10.110 02:05:20 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:24:10.110 02:05:20 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.110 02:05:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:10.369 [2024-11-19 02:05:20.732683] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.369 null0 00:24:10.369 [2024-11-19 02:05:20.764648] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:10.369 [2024-11-19 02:05:20.764972] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:10.369 02:05:20 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.369 02:05:20 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:24:10.369 481133080 00:24:10.369 02:05:20 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:24:10.369 624001610 00:24:10.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:10.369 02:05:20 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=100097 00:24:10.369 02:05:20 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:24:10.369 02:05:20 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 100097 /var/tmp/bperf.sock 00:24:10.369 02:05:20 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 100097 ']' 00:24:10.369 02:05:20 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:10.369 02:05:20 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:10.369 02:05:20 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:10.369 02:05:20 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:10.369 02:05:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:10.369 [2024-11-19 02:05:20.847115] Starting SPDK v25.01-pre git sha1 d47eb51c9 / DPDK 22.11.4 initialization... 
00:24:10.369 [2024-11-19 02:05:20.847371] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100097 ] 00:24:10.628 [2024-11-19 02:05:20.998127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.628 [2024-11-19 02:05:21.022708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.628 02:05:21 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:10.628 02:05:21 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:24:10.628 02:05:21 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:24:10.628 02:05:21 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:24:10.888 02:05:21 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:24:10.888 02:05:21 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:11.147 [2024-11-19 02:05:21.589895] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:11.147 02:05:21 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:11.147 02:05:21 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:24:11.406 [2024-11-19 02:05:21.821298] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:11.406 nvme0n1 00:24:11.406 02:05:21 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:24:11.406 02:05:21 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:24:11.406 02:05:21 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:24:11.406 02:05:21 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:24:11.406 02:05:21 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:11.406 02:05:21 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:24:11.665 02:05:22 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:24:11.665 02:05:22 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:24:11.665 02:05:22 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:24:11.665 02:05:22 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:24:11.665 02:05:22 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:11.665 02:05:22 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:11.665 02:05:22 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:24:11.924 02:05:22 keyring_linux -- keyring/linux.sh@25 -- # sn=481133080 00:24:11.924 02:05:22 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:24:11.924 02:05:22 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
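The keyctl round-trip running here is the core of keyring_linux: seed a key into the session keyring, let SPDK resolve it by name (--psk :spdk-test:key0), then verify the serial and payload, as the search above and the print check just below do. A standalone sketch using the same throwaway test key:

psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
keyctl add user :spdk-test:key0 "$psk" @s      # prints the new key's serial
sn=$(keyctl search @s user :spdk-test:key0)    # resolve the serial by name
keyctl print "$sn"                             # must echo the PSK verbatim
keyctl unlink "$sn"                            # cleanup: drop the link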
00:24:11.924 02:05:22 keyring_linux -- keyring/linux.sh@26 -- # [[ 481133080 == \4\8\1\1\3\3\0\8\0 ]] 00:24:11.924 02:05:22 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 481133080 00:24:11.924 02:05:22 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:24:11.924 02:05:22 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:12.181 Running I/O for 1 seconds... 00:24:13.117 13232.00 IOPS, 51.69 MiB/s 00:24:13.117 Latency(us) 00:24:13.117 [2024-11-19T02:05:23.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.117 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:13.117 nvme0n1 : 1.01 13237.49 51.71 0.00 0.00 9621.64 6076.97 14894.55 00:24:13.117 [2024-11-19T02:05:23.732Z] =================================================================================================================== 00:24:13.117 [2024-11-19T02:05:23.732Z] Total : 13237.49 51.71 0.00 0.00 9621.64 6076.97 14894.55 00:24:13.117 { 00:24:13.117 "results": [ 00:24:13.117 { 00:24:13.117 "job": "nvme0n1", 00:24:13.117 "core_mask": "0x2", 00:24:13.117 "workload": "randread", 00:24:13.117 "status": "finished", 00:24:13.117 "queue_depth": 128, 00:24:13.117 "io_size": 4096, 00:24:13.117 "runtime": 1.009255, 00:24:13.117 "iops": 13237.487057284829, 00:24:13.117 "mibps": 51.70893381751886, 00:24:13.117 "io_failed": 0, 00:24:13.117 "io_timeout": 0, 00:24:13.117 "avg_latency_us": 9621.64001306478, 00:24:13.117 "min_latency_us": 6076.9745454545455, 00:24:13.117 "max_latency_us": 14894.545454545454 00:24:13.117 } 00:24:13.117 ], 00:24:13.117 "core_count": 1 00:24:13.117 } 00:24:13.117 02:05:23 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:13.117 02:05:23 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:13.377 02:05:23 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:24:13.377 02:05:23 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:24:13.377 02:05:23 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:24:13.377 02:05:23 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:24:13.377 02:05:23 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:13.377 02:05:23 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:24:13.636 02:05:24 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:24:13.636 02:05:24 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:24:13.636 02:05:24 keyring_linux -- keyring/linux.sh@23 -- # return 00:24:13.636 02:05:24 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:13.636 02:05:24 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:24:13.636 02:05:24 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:13.636 
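The timed randread run above is kicked off through bdevperf.py perform_tests on the same socket; a sketch of starting such a run and summarizing its results (field names match the JSON blob above; that perform_tests writes the blob to stdout is an assumption):

/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests \
    | jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.avg_latency_us) us avg latency"'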
02:05:24 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:24:13.636 02:05:24 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:13.636 02:05:24 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:24:13.636 02:05:24 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:13.636 02:05:24 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:13.636 02:05:24 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:24:13.895 [2024-11-19 02:05:24.369371] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:13.895 [2024-11-19 02:05:24.370075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d5ed20 (107): Transport endpoint is not connected 00:24:13.895 [2024-11-19 02:05:24.371060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d5ed20 (9): Bad file descriptor 00:24:13.895 [2024-11-19 02:05:24.372057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:24:13.895 [2024-11-19 02:05:24.372083] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:13.895 [2024-11-19 02:05:24.372094] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:24:13.896 [2024-11-19 02:05:24.372104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:24:13.896 request: 00:24:13.896 { 00:24:13.896 "name": "nvme0", 00:24:13.896 "trtype": "tcp", 00:24:13.896 "traddr": "127.0.0.1", 00:24:13.896 "adrfam": "ipv4", 00:24:13.896 "trsvcid": "4420", 00:24:13.896 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:13.896 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:13.896 "prchk_reftag": false, 00:24:13.896 "prchk_guard": false, 00:24:13.896 "hdgst": false, 00:24:13.896 "ddgst": false, 00:24:13.896 "psk": ":spdk-test:key1", 00:24:13.896 "allow_unrecognized_csi": false, 00:24:13.896 "method": "bdev_nvme_attach_controller", 00:24:13.896 "req_id": 1 00:24:13.896 } 00:24:13.896 Got JSON-RPC error response 00:24:13.896 response: 00:24:13.896 { 00:24:13.896 "code": -5, 00:24:13.896 "message": "Input/output error" 00:24:13.896 } 00:24:13.896 02:05:24 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:24:13.896 02:05:24 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:13.896 02:05:24 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:13.896 02:05:24 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:13.896 02:05:24 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:24:13.896 02:05:24 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:24:13.896 02:05:24 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:24:13.896 02:05:24 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:24:13.896 02:05:24 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:24:13.896 02:05:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:24:13.896 02:05:24 keyring_linux -- keyring/linux.sh@33 -- # sn=481133080 00:24:13.896 02:05:24 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 481133080 00:24:13.896 1 links removed 00:24:13.896 02:05:24 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:24:13.896 02:05:24 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:24:13.896 02:05:24 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:24:13.896 02:05:24 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:24:13.896 02:05:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:24:13.896 02:05:24 keyring_linux -- keyring/linux.sh@33 -- # sn=624001610 00:24:13.896 02:05:24 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 624001610 00:24:13.896 1 links removed 00:24:13.896 02:05:24 keyring_linux -- keyring/linux.sh@41 -- # killprocess 100097 00:24:13.896 02:05:24 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 100097 ']' 00:24:13.896 02:05:24 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 100097 00:24:13.896 02:05:24 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:24:13.896 02:05:24 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:13.896 02:05:24 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100097 00:24:13.896 killing process with pid 100097 00:24:13.896 Received shutdown signal, test time was about 1.000000 seconds 00:24:13.896 00:24:13.896 Latency(us) 00:24:13.896 [2024-11-19T02:05:24.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.896 [2024-11-19T02:05:24.511Z] =================================================================================================================== 00:24:13.896 [2024-11-19T02:05:24.511Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:13.896 02:05:24 keyring_linux -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:13.896 02:05:24 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:13.896 02:05:24 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100097' 00:24:13.896 02:05:24 keyring_linux -- common/autotest_common.sh@973 -- # kill 100097 00:24:13.896 02:05:24 keyring_linux -- common/autotest_common.sh@978 -- # wait 100097 00:24:14.155 02:05:24 keyring_linux -- keyring/linux.sh@42 -- # killprocess 100092 00:24:14.155 02:05:24 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 100092 ']' 00:24:14.155 02:05:24 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 100092 00:24:14.155 02:05:24 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:24:14.155 02:05:24 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:14.155 02:05:24 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100092 00:24:14.155 killing process with pid 100092 00:24:14.155 02:05:24 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:14.155 02:05:24 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:14.155 02:05:24 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100092' 00:24:14.155 02:05:24 keyring_linux -- common/autotest_common.sh@973 -- # kill 100092 00:24:14.155 02:05:24 keyring_linux -- common/autotest_common.sh@978 -- # wait 100092 00:24:14.415 ************************************ 00:24:14.415 END TEST keyring_linux 00:24:14.415 ************************************ 00:24:14.415 00:24:14.415 real 0m4.729s 00:24:14.415 user 0m9.700s 00:24:14.415 sys 0m1.301s 00:24:14.415 02:05:24 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:14.415 02:05:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:24:14.415 02:05:24 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:24:14.415 02:05:24 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:24:14.415 02:05:24 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:24:14.415 02:05:24 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:24:14.415 02:05:24 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:24:14.415 02:05:24 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:24:14.415 02:05:24 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:24:14.415 02:05:24 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:24:14.415 02:05:24 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:24:14.415 02:05:24 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:24:14.415 02:05:24 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:24:14.415 02:05:24 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:24:14.415 02:05:24 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:24:14.415 02:05:24 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:24:14.415 02:05:24 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:24:14.415 02:05:24 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:24:14.415 02:05:24 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:24:14.415 02:05:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:14.415 02:05:24 -- common/autotest_common.sh@10 -- # set +x 00:24:14.415 02:05:24 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:24:14.415 02:05:24 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:24:14.415 02:05:24 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:24:14.415 02:05:24 -- common/autotest_common.sh@10 -- # set +x 00:24:16.320 INFO: APP EXITING 00:24:16.321 INFO: 
killing all VMs 00:24:16.321 INFO: killing vhost app 00:24:16.321 INFO: EXIT DONE 00:24:16.889 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:16.889 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:24:16.889 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:24:17.825 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:17.825 Cleaning 00:24:17.825 Removing: /var/run/dpdk/spdk0/config 00:24:17.825 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:24:17.825 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:24:17.825 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:24:17.825 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:24:17.825 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:24:17.825 Removing: /var/run/dpdk/spdk0/hugepage_info 00:24:17.825 Removing: /var/run/dpdk/spdk1/config 00:24:17.825 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:24:17.825 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:24:17.825 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:24:17.825 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:24:17.825 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:24:17.825 Removing: /var/run/dpdk/spdk1/hugepage_info 00:24:17.825 Removing: /var/run/dpdk/spdk2/config 00:24:17.825 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:24:17.825 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:24:17.825 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:24:17.825 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:24:17.825 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:24:17.825 Removing: /var/run/dpdk/spdk2/hugepage_info 00:24:17.825 Removing: /var/run/dpdk/spdk3/config 00:24:17.826 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:24:17.826 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:24:17.826 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:24:17.826 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:24:17.826 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:24:17.826 Removing: /var/run/dpdk/spdk3/hugepage_info 00:24:17.826 Removing: /var/run/dpdk/spdk4/config 00:24:17.826 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:24:17.826 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:24:17.826 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:24:17.826 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:24:17.826 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:24:17.826 Removing: /var/run/dpdk/spdk4/hugepage_info 00:24:17.826 Removing: /dev/shm/nvmf_trace.0 00:24:17.826 Removing: /dev/shm/spdk_tgt_trace.pid69004 00:24:17.826 Removing: /var/run/dpdk/spdk0 00:24:17.826 Removing: /var/run/dpdk/spdk1 00:24:17.826 Removing: /var/run/dpdk/spdk2 00:24:17.826 Removing: /var/run/dpdk/spdk3 00:24:17.826 Removing: /var/run/dpdk/spdk4 00:24:17.826 Removing: /var/run/dpdk/spdk_pid100092 00:24:17.826 Removing: /var/run/dpdk/spdk_pid100097 00:24:17.826 Removing: /var/run/dpdk/spdk_pid68857 00:24:17.826 Removing: /var/run/dpdk/spdk_pid69004 00:24:17.826 Removing: /var/run/dpdk/spdk_pid69197 00:24:17.826 Removing: /var/run/dpdk/spdk_pid69284 00:24:17.826 Removing: /var/run/dpdk/spdk_pid69298 00:24:17.826 Removing: /var/run/dpdk/spdk_pid69408 00:24:17.826 Removing: /var/run/dpdk/spdk_pid69418 00:24:17.826 Removing: /var/run/dpdk/spdk_pid69552 00:24:17.826 Removing: 
/var/run/dpdk/spdk_pid69748 00:24:17.826 Removing: /var/run/dpdk/spdk_pid69896 00:24:17.826 Removing: /var/run/dpdk/spdk_pid69969 00:24:17.826 Removing: /var/run/dpdk/spdk_pid70045 00:24:17.826 Removing: /var/run/dpdk/spdk_pid70131 00:24:17.826 Removing: /var/run/dpdk/spdk_pid70203 00:24:17.826 Removing: /var/run/dpdk/spdk_pid70242 00:24:17.826 Removing: /var/run/dpdk/spdk_pid70272 00:24:17.826 Removing: /var/run/dpdk/spdk_pid70341 00:24:17.826 Removing: /var/run/dpdk/spdk_pid70417 00:24:17.826 Removing: /var/run/dpdk/spdk_pid70852 00:24:17.826 Removing: /var/run/dpdk/spdk_pid70897 00:24:17.826 Removing: /var/run/dpdk/spdk_pid70936 00:24:17.826 Removing: /var/run/dpdk/spdk_pid70944 00:24:17.826 Removing: /var/run/dpdk/spdk_pid70998 00:24:17.826 Removing: /var/run/dpdk/spdk_pid71001 00:24:17.826 Removing: /var/run/dpdk/spdk_pid71068 00:24:17.826 Removing: /var/run/dpdk/spdk_pid71084 00:24:17.826 Removing: /var/run/dpdk/spdk_pid71124 00:24:17.826 Removing: /var/run/dpdk/spdk_pid71129 00:24:17.826 Removing: /var/run/dpdk/spdk_pid71175 00:24:17.826 Removing: /var/run/dpdk/spdk_pid71193 00:24:17.826 Removing: /var/run/dpdk/spdk_pid71312 00:24:17.826 Removing: /var/run/dpdk/spdk_pid71348 00:24:17.826 Removing: /var/run/dpdk/spdk_pid71430 00:24:17.826 Removing: /var/run/dpdk/spdk_pid71751 00:24:17.826 Removing: /var/run/dpdk/spdk_pid71769 00:24:17.826 Removing: /var/run/dpdk/spdk_pid71800 00:24:17.826 Removing: /var/run/dpdk/spdk_pid71813 00:24:17.826 Removing: /var/run/dpdk/spdk_pid71829 00:24:17.826 Removing: /var/run/dpdk/spdk_pid71848 00:24:18.085 Removing: /var/run/dpdk/spdk_pid71856 00:24:18.085 Removing: /var/run/dpdk/spdk_pid71871 00:24:18.085 Removing: /var/run/dpdk/spdk_pid71890 00:24:18.085 Removing: /var/run/dpdk/spdk_pid71904 00:24:18.085 Removing: /var/run/dpdk/spdk_pid71919 00:24:18.085 Removing: /var/run/dpdk/spdk_pid71938 00:24:18.085 Removing: /var/run/dpdk/spdk_pid71952 00:24:18.085 Removing: /var/run/dpdk/spdk_pid71966 00:24:18.085 Removing: /var/run/dpdk/spdk_pid71981 00:24:18.085 Removing: /var/run/dpdk/spdk_pid71994 00:24:18.085 Removing: /var/run/dpdk/spdk_pid72010 00:24:18.085 Removing: /var/run/dpdk/spdk_pid72029 00:24:18.085 Removing: /var/run/dpdk/spdk_pid72042 00:24:18.085 Removing: /var/run/dpdk/spdk_pid72058 00:24:18.085 Removing: /var/run/dpdk/spdk_pid72083 00:24:18.085 Removing: /var/run/dpdk/spdk_pid72102 00:24:18.085 Removing: /var/run/dpdk/spdk_pid72126 00:24:18.085 Removing: /var/run/dpdk/spdk_pid72198 00:24:18.085 Removing: /var/run/dpdk/spdk_pid72221 00:24:18.085 Removing: /var/run/dpdk/spdk_pid72230 00:24:18.085 Removing: /var/run/dpdk/spdk_pid72259 00:24:18.085 Removing: /var/run/dpdk/spdk_pid72267 00:24:18.085 Removing: /var/run/dpdk/spdk_pid72276 00:24:18.085 Removing: /var/run/dpdk/spdk_pid72313 00:24:18.085 Removing: /var/run/dpdk/spdk_pid72326 00:24:18.085 Removing: /var/run/dpdk/spdk_pid72355 00:24:18.085 Removing: /var/run/dpdk/spdk_pid72359 00:24:18.085 Removing: /var/run/dpdk/spdk_pid72368 00:24:18.085 Removing: /var/run/dpdk/spdk_pid72378 00:24:18.085 Removing: /var/run/dpdk/spdk_pid72382 00:24:18.085 Removing: /var/run/dpdk/spdk_pid72391 00:24:18.085 Removing: /var/run/dpdk/spdk_pid72401 00:24:18.085 Removing: /var/run/dpdk/spdk_pid72405 00:24:18.085 Removing: /var/run/dpdk/spdk_pid72439 00:24:18.085 Removing: /var/run/dpdk/spdk_pid72460 00:24:18.085 Removing: /var/run/dpdk/spdk_pid72465 00:24:18.085 Removing: /var/run/dpdk/spdk_pid72498 00:24:18.085 Removing: /var/run/dpdk/spdk_pid72502 00:24:18.085 Removing: /var/run/dpdk/spdk_pid72509 
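The keyring_linux teardown traced above resolves each test key's serial number before unlinking it from the session keyring. A minimal standalone sketch of that pattern — not part of the captured log — assuming keyctl(1) from keyutils and the ':spdk-test:key0' description this test registered:

    # Find the serial number of the user-type key in the session keyring (@s).
    sn=$(keyctl search @s user :spdk-test:key0)
    # Drop the link to it; keyctl reports "1 links removed" on success, as seen above.
    keyctl unlink "$sn"

Note that the earlier bdev_nvme_attach_controller call with psk ':spdk-test:key1' failing with code -5 (Input/output error) is the expected outcome at that point: the script asserts a nonzero exit status (es=1) before running this cleanup.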
00:24:18.085 Removing: /var/run/dpdk/spdk_pid72550 00:24:18.085 Removing: /var/run/dpdk/spdk_pid72556 00:24:18.085 Removing: /var/run/dpdk/spdk_pid72588 00:24:18.086 Removing: /var/run/dpdk/spdk_pid72590 00:24:18.086 Removing: /var/run/dpdk/spdk_pid72603 00:24:18.086 Removing: /var/run/dpdk/spdk_pid72605 00:24:18.086 Removing: /var/run/dpdk/spdk_pid72607 00:24:18.086 Removing: /var/run/dpdk/spdk_pid72620 00:24:18.086 Removing: /var/run/dpdk/spdk_pid72624 00:24:18.086 Removing: /var/run/dpdk/spdk_pid72626 00:24:18.086 Removing: /var/run/dpdk/spdk_pid72708 00:24:18.086 Removing: /var/run/dpdk/spdk_pid72750 00:24:18.086 Removing: /var/run/dpdk/spdk_pid72857 00:24:18.086 Removing: /var/run/dpdk/spdk_pid72885 00:24:18.086 Removing: /var/run/dpdk/spdk_pid72930 00:24:18.086 Removing: /var/run/dpdk/spdk_pid72950 00:24:18.086 Removing: /var/run/dpdk/spdk_pid72961 00:24:18.086 Removing: /var/run/dpdk/spdk_pid72981 00:24:18.086 Removing: /var/run/dpdk/spdk_pid73010 00:24:18.086 Removing: /var/run/dpdk/spdk_pid73028 00:24:18.086 Removing: /var/run/dpdk/spdk_pid73105 00:24:18.086 Removing: /var/run/dpdk/spdk_pid73117 00:24:18.086 Removing: /var/run/dpdk/spdk_pid73155 00:24:18.086 Removing: /var/run/dpdk/spdk_pid73217 00:24:18.086 Removing: /var/run/dpdk/spdk_pid73265 00:24:18.086 Removing: /var/run/dpdk/spdk_pid73290 00:24:18.086 Removing: /var/run/dpdk/spdk_pid73383 00:24:18.086 Removing: /var/run/dpdk/spdk_pid73431 00:24:18.086 Removing: /var/run/dpdk/spdk_pid73458 00:24:18.086 Removing: /var/run/dpdk/spdk_pid73690 00:24:18.086 Removing: /var/run/dpdk/spdk_pid73776 00:24:18.086 Removing: /var/run/dpdk/spdk_pid73805 00:24:18.086 Removing: /var/run/dpdk/spdk_pid73834 00:24:18.086 Removing: /var/run/dpdk/spdk_pid73868 00:24:18.086 Removing: /var/run/dpdk/spdk_pid73896 00:24:18.086 Removing: /var/run/dpdk/spdk_pid73935 00:24:18.086 Removing: /var/run/dpdk/spdk_pid73961 00:24:18.086 Removing: /var/run/dpdk/spdk_pid74366 00:24:18.086 Removing: /var/run/dpdk/spdk_pid74406 00:24:18.086 Removing: /var/run/dpdk/spdk_pid74745 00:24:18.086 Removing: /var/run/dpdk/spdk_pid75204 00:24:18.086 Removing: /var/run/dpdk/spdk_pid75468 00:24:18.345 Removing: /var/run/dpdk/spdk_pid76299 00:24:18.345 Removing: /var/run/dpdk/spdk_pid77217 00:24:18.345 Removing: /var/run/dpdk/spdk_pid77334 00:24:18.345 Removing: /var/run/dpdk/spdk_pid77402 00:24:18.345 Removing: /var/run/dpdk/spdk_pid78795 00:24:18.345 Removing: /var/run/dpdk/spdk_pid79110 00:24:18.345 Removing: /var/run/dpdk/spdk_pid82801 00:24:18.345 Removing: /var/run/dpdk/spdk_pid83158 00:24:18.345 Removing: /var/run/dpdk/spdk_pid83267 00:24:18.345 Removing: /var/run/dpdk/spdk_pid83394 00:24:18.345 Removing: /var/run/dpdk/spdk_pid83415 00:24:18.345 Removing: /var/run/dpdk/spdk_pid83436 00:24:18.345 Removing: /var/run/dpdk/spdk_pid83456 00:24:18.345 Removing: /var/run/dpdk/spdk_pid83542 00:24:18.345 Removing: /var/run/dpdk/spdk_pid83670 00:24:18.345 Removing: /var/run/dpdk/spdk_pid83807 00:24:18.345 Removing: /var/run/dpdk/spdk_pid83880 00:24:18.345 Removing: /var/run/dpdk/spdk_pid84063 00:24:18.345 Removing: /var/run/dpdk/spdk_pid84130 00:24:18.345 Removing: /var/run/dpdk/spdk_pid84211 00:24:18.345 Removing: /var/run/dpdk/spdk_pid84559 00:24:18.345 Removing: /var/run/dpdk/spdk_pid84978 00:24:18.345 Removing: /var/run/dpdk/spdk_pid84979 00:24:18.345 Removing: /var/run/dpdk/spdk_pid84980 00:24:18.345 Removing: /var/run/dpdk/spdk_pid85237 00:24:18.345 Removing: /var/run/dpdk/spdk_pid85479 00:24:18.345 Removing: /var/run/dpdk/spdk_pid85481 00:24:18.345 Removing: 
/var/run/dpdk/spdk_pid87769 00:24:18.345 Removing: /var/run/dpdk/spdk_pid88150 00:24:18.345 Removing: /var/run/dpdk/spdk_pid88152 00:24:18.345 Removing: /var/run/dpdk/spdk_pid88475 00:24:18.345 Removing: /var/run/dpdk/spdk_pid88489 00:24:18.345 Removing: /var/run/dpdk/spdk_pid88508 00:24:18.345 Removing: /var/run/dpdk/spdk_pid88539 00:24:18.345 Removing: /var/run/dpdk/spdk_pid88544 00:24:18.345 Removing: /var/run/dpdk/spdk_pid88629 00:24:18.345 Removing: /var/run/dpdk/spdk_pid88637 00:24:18.345 Removing: /var/run/dpdk/spdk_pid88745 00:24:18.345 Removing: /var/run/dpdk/spdk_pid88747 00:24:18.345 Removing: /var/run/dpdk/spdk_pid88855 00:24:18.345 Removing: /var/run/dpdk/spdk_pid88863 00:24:18.345 Removing: /var/run/dpdk/spdk_pid89304 00:24:18.345 Removing: /var/run/dpdk/spdk_pid89347 00:24:18.345 Removing: /var/run/dpdk/spdk_pid89456 00:24:18.345 Removing: /var/run/dpdk/spdk_pid89535 00:24:18.345 Removing: /var/run/dpdk/spdk_pid89885 00:24:18.345 Removing: /var/run/dpdk/spdk_pid90074 00:24:18.345 Removing: /var/run/dpdk/spdk_pid90501 00:24:18.345 Removing: /var/run/dpdk/spdk_pid91055 00:24:18.345 Removing: /var/run/dpdk/spdk_pid91903 00:24:18.345 Removing: /var/run/dpdk/spdk_pid92546 00:24:18.345 Removing: /var/run/dpdk/spdk_pid92549 00:24:18.345 Removing: /var/run/dpdk/spdk_pid94545 00:24:18.345 Removing: /var/run/dpdk/spdk_pid94598 00:24:18.346 Removing: /var/run/dpdk/spdk_pid94645 00:24:18.346 Removing: /var/run/dpdk/spdk_pid94701 00:24:18.346 Removing: /var/run/dpdk/spdk_pid94809 00:24:18.346 Removing: /var/run/dpdk/spdk_pid94856 00:24:18.346 Removing: /var/run/dpdk/spdk_pid94909 00:24:18.346 Removing: /var/run/dpdk/spdk_pid94956 00:24:18.346 Removing: /var/run/dpdk/spdk_pid95321 00:24:18.346 Removing: /var/run/dpdk/spdk_pid96532 00:24:18.346 Removing: /var/run/dpdk/spdk_pid96673 00:24:18.346 Removing: /var/run/dpdk/spdk_pid96905 00:24:18.346 Removing: /var/run/dpdk/spdk_pid97487 00:24:18.346 Removing: /var/run/dpdk/spdk_pid97647 00:24:18.346 Removing: /var/run/dpdk/spdk_pid97805 00:24:18.346 Removing: /var/run/dpdk/spdk_pid97902 00:24:18.346 Removing: /var/run/dpdk/spdk_pid98058 00:24:18.346 Removing: /var/run/dpdk/spdk_pid98167 00:24:18.346 Removing: /var/run/dpdk/spdk_pid98869 00:24:18.346 Removing: /var/run/dpdk/spdk_pid98903 00:24:18.346 Removing: /var/run/dpdk/spdk_pid98934 00:24:18.346 Removing: /var/run/dpdk/spdk_pid99189 00:24:18.346 Removing: /var/run/dpdk/spdk_pid99219 00:24:18.346 Removing: /var/run/dpdk/spdk_pid99254 00:24:18.346 Removing: /var/run/dpdk/spdk_pid99722 00:24:18.346 Removing: /var/run/dpdk/spdk_pid99733 00:24:18.605 Removing: /var/run/dpdk/spdk_pid99969 00:24:18.605 Clean 00:24:18.605 02:05:29 -- common/autotest_common.sh@1453 -- # return 0 00:24:18.605 02:05:29 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:24:18.605 02:05:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:18.605 02:05:29 -- common/autotest_common.sh@10 -- # set +x 00:24:18.605 02:05:29 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:24:18.605 02:05:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:18.605 02:05:29 -- common/autotest_common.sh@10 -- # set +x 00:24:18.605 02:05:29 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:18.605 02:05:29 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:24:18.605 02:05:29 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:24:18.605 02:05:29 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:24:18.605 
02:05:29 -- spdk/autotest.sh@398 -- # hostname 00:24:18.605 02:05:29 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:24:18.864 geninfo: WARNING: invalid characters removed from testname! 00:24:40.861 02:05:50 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:43.396 02:05:53 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:45.932 02:05:56 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:48.466 02:05:58 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:51.003 02:06:01 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:53.541 02:06:03 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:55.446 02:06:06 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:24:55.446 02:06:06 -- spdk/autorun.sh@1 -- $ timing_finish 00:24:55.446 02:06:06 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:24:55.446 02:06:06 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:24:55.446 02:06:06 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:24:55.446 02:06:06 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build 
Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:55.705 + [[ -n 6001 ]] 00:24:55.705 + sudo kill 6001 00:24:55.714 [Pipeline] } 00:24:55.729 [Pipeline] // timeout 00:24:55.734 [Pipeline] } 00:24:55.747 [Pipeline] // stage 00:24:55.751 [Pipeline] } 00:24:55.765 [Pipeline] // catchError 00:24:55.773 [Pipeline] stage 00:24:55.775 [Pipeline] { (Stop VM) 00:24:55.786 [Pipeline] sh 00:24:56.066 + vagrant halt 00:24:59.354 ==> default: Halting domain... 00:25:05.933 [Pipeline] sh 00:25:06.212 + vagrant destroy -f 00:25:09.502 ==> default: Removing domain... 00:25:09.512 [Pipeline] sh 00:25:09.791 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:25:09.800 [Pipeline] } 00:25:09.814 [Pipeline] // stage 00:25:09.819 [Pipeline] } 00:25:09.831 [Pipeline] // dir 00:25:09.836 [Pipeline] } 00:25:09.849 [Pipeline] // wrap 00:25:09.854 [Pipeline] } 00:25:09.866 [Pipeline] // catchError 00:25:09.874 [Pipeline] stage 00:25:09.876 [Pipeline] { (Epilogue) 00:25:09.887 [Pipeline] sh 00:25:10.191 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:25:15.482 [Pipeline] catchError 00:25:15.484 [Pipeline] { 00:25:15.500 [Pipeline] sh 00:25:15.784 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:25:16.041 Artifacts sizes are good 00:25:16.049 [Pipeline] } 00:25:16.065 [Pipeline] // catchError 00:25:16.077 [Pipeline] archiveArtifacts 00:25:16.085 Archiving artifacts 00:25:16.208 [Pipeline] cleanWs 00:25:16.219 [WS-CLEANUP] Deleting project workspace... 00:25:16.219 [WS-CLEANUP] Deferred wipeout is used... 00:25:16.225 [WS-CLEANUP] done 00:25:16.227 [Pipeline] } 00:25:16.244 [Pipeline] // stage 00:25:16.249 [Pipeline] } 00:25:16.266 [Pipeline] // node 00:25:16.272 [Pipeline] End of Pipeline 00:25:16.314 Finished: SUCCESS
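As a postscript on the coverage epilogue above: the lcov post-processing amounts to one merge of the baseline and test-run tracefiles followed by a series of prune passes. A condensed sketch, not part of the captured log, using the same flags the log shows (paths shortened; the --rc and --ignore-errors options from the log omitted for brevity):

    # Merge baseline and test-run coverage into a single tracefile.
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    # Prune external and non-target sources, mirroring the passes above.
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
    lcov -q -r cov_total.info '/usr/*' -o cov_total.info
    lcov -q -r cov_total.info '*/examples/vmd/*' -o cov_total.info
    lcov -q -r cov_total.info '*/app/spdk_lspci/*' -o cov_total.info
    lcov -q -r cov_total.info '*/app/spdk_top/*' -o cov_total.info

Reusing cov_total.info as both input and output for each prune pass matches what the log itself does: lcov reads the input tracefile in full before writing the filtered result.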